Authors :
Nandha Sath Niyog
Volume/Issue :
Volume 10 - 2025, Issue 6 - June
Google Scholar :
https://tinyurl.com/mprhjn2k
DOI :
https://doi.org/10.38124/ijisrt/25jun130
Note : A published paper may take 4-5 working days from the publication date to appear in PlumX Metrics, Semantic Scholar, and ResearchGate.
Abstract :
This study introduces the NSN (NANDHA SATH NIYOG) Humanophotogrammetry Behavioural Model, a novel
framework integrating 3D motion capture, machine vision, and cognitive neuroscience to quantify perceptual error (ΔP) in
behaviour observation. Grounded in phenomenology (Merleau-Ponty, 1945) and embodied cognition (Varela et al., 1991),
the model distinguishes digital error (εd: hardware limitations) from temporal illusion (εt:
neurocognitive latency). A pilot study (N = 10) recorded participants during baseline and stress tasks using
stereophotogrammetry (60 fps) and synchronized EEG.
Results revealed:
ΔP ranges of 350–500 ms under stress (22% time dilation vs. objective timestamps, *p* < .05),
16% gesture misclassification in high-motion frames (εd), and
a 31% improvement in intent–action alignment after Photo Auto Perception (PAP) correction.
The findings empirically validate that perception is time-bound, challenging classical behaviourism. Applications span
clinical diagnostics (e.g., anxiety via micro-expression latency) and human-AI interaction (temporal synchrony calibration).
The study advances interdisciplinary dialogue by formalizing perceptual error as ΔP = εd + εt,
bridging psychology, computer vision, and philosophy of mind.
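The decomposition ΔP = εd + εt and the time-dilation measure reported above can be sketched as follows. This is a minimal illustration only: the function names and all numeric values are hypothetical placeholders, not the study's raw data or analysis code.

```python
# Hypothetical illustration of the perceptual-error decomposition
# ΔP = εd (digital error) + εt (temporal illusion).
# All numbers below are invented for demonstration purposes.

def perceptual_error(eps_d_ms: float, eps_t_ms: float) -> float:
    """Total perceptual error ΔP in milliseconds."""
    return eps_d_ms + eps_t_ms

def time_dilation_pct(subjective_ms: float, objective_ms: float) -> float:
    """Subjective overestimation relative to objective frame timestamps."""
    return 100.0 * (subjective_ms - objective_ms) / objective_ms

# Example: a 120 ms hardware/tracking error plus a 280 ms neurocognitive lag
delta_p = perceptual_error(eps_d_ms=120.0, eps_t_ms=280.0)

# A participant judging a 400 ms objective interval as 488 ms
dilation = time_dilation_pct(subjective_ms=488.0, objective_ms=400.0)
print(delta_p, dilation)
```

Under these made-up inputs the total ΔP falls in the 350–500 ms band the study reports, and the dilation works out to 22%, matching the stress-condition effect size.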
This paper introduces Humanophotogrammetry, a behavioural model quantifying human actions through
photogrammetric data, anchored in the Theory of Photo Auto Perception (PAP). PAP posits that "accuracy of perception is
the methodological error in data and illusion of reality of biological time sense", challenging classical psychophysical
assumptions. We present a framework where behavioural metrics (e.g., gaze, posture) are extracted via 3D imaging and
machine perception, then mapped to cognitive states. Clinical diagnostics and human-robot interaction applications are
discussed, with validation pathways addressing PAP’s implications for empirical realism.
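The extraction-to-mapping pipeline described above (photogrammetric metrics such as gaze and posture, mapped to cognitive states) can be sketched schematically. Everything here is a hypothetical stand-in: the field names, thresholds, and the simple additive rule are illustrative assumptions, not the model's actual mapping.

```python
# Hypothetical sketch of the Humanophotogrammetry pipeline:
# photogrammetric behavioural metrics -> coarse cognitive-state label.
# Fields, weights, and the threshold are illustrative placeholders only.
from dataclasses import dataclass

@dataclass
class FrameMetrics:
    gaze_deviation_deg: float   # gaze offset from task target
    posture_sway_mm: float      # trunk sway amplitude in the frame
    gesture_latency_ms: float   # intent-to-action delay

def infer_state(m: FrameMetrics) -> str:
    """Map extracted frame metrics to a coarse cognitive-state label."""
    stress_score = (
        m.gaze_deviation_deg / 10.0
        + m.posture_sway_mm / 50.0
        + m.gesture_latency_ms / 500.0
    )
    return "stressed" if stress_score > 1.5 else "baseline"

print(infer_state(FrameMetrics(12.0, 60.0, 450.0)))
print(infer_state(FrameMetrics(2.0, 10.0, 100.0)))
```

In the full model this threshold rule would be replaced by the PAP-corrected classifier; the sketch only shows the shape of the metrics-to-state mapping.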
Highlights
Introduces Photo Auto Perception (PAP) theorem linking phenomenology and machine vision.
Quantifies perceptual error (ΔP) via EEG–photogrammetry synchronization.
Demonstrates a 22% time-dilation effect under stress.
Open-source tools (OpenPose, Blender) enhance reproducibility.
Keywords :
Perceptual Error, Embodied Cognition, Temporal Illusion, Humanophotogrammetry, Phenomenology.
References :
- Gibson, J. J. (1979). The ecological approach to visual perception. Houghton Mifflin.
- Merleau-Ponty, M. (1945). Phenomenology of perception. Gallimard.
- Varela, F., Thompson, E., & Rosch, E. (1991). The embodied mind. MIT Press.
- American Society for Photogrammetry and Remote Sensing. (2019). ASPRS definition of photogrammetry.
- Gherardi, R., et al. (2012). Structure from Motion algorithms for feature comparison in image sequences. Computer Vision and Pattern Recognition.
- Kersten, T. P., et al. (2015). Accuracy comparison of laser scanning vs. photogrammetry for heritage documentation. Remote Sensing, 7(11), 15770–15797.
- Nicolae, C., et al. (2014). Challenges in photogrammetric measurement of featureless surfaces. ISPRS Journal of Photogrammetry and Remote Sensing, 98, 227–238.
- Toldo, R. (2013). Robust point cloud generation from multi-view imagery. Photogrammetric Engineering & Remote Sensing, 79(4), 241–256.
- Gonzalez-Romo, N. I., et al. (2023). Anatomic depth estimation and 3D reconstruction in neurosurgery using monoscopic photogrammetry. Operative Neurosurgery, 24(4), 432–444.
- de Oliveira, A. S., et al. (2023). Guidelines for high-quality 3D neuroanatomy models using photogrammetry. Anatomical Sciences Education, 16(6), 870–883.
- Aydin, S. O., et al. (2023). 3D modeling and augmented reality in microsurgical neuroanatomy training. Operative Neurosurgery, 24(3), 318–323.
- Rehany, N., et al. (2017). Photogrammetry vs. laser scanning for craniofacial deformity assessment. Journal of Neurosurgical Sciences, 61(5), 531–539.
- Lacko, J. (2020). Photogrammetry and photorealistic 3D models in education: Psychological implications. Frontiers in Education, 5, 144.
- Stanford Division of Clinical Anatomy. (2018). Photogrammetry of cadaveric specimens in anatomy education. Journal of Medical Education Research, 5(2), 2382120518799356.
- Chamilothori, K., et al. (2019). 360° photography vs. interactive 3D models in spatial learning. Virtual Reality, 23(4), 369–384.
- Li, Y., et al. (2024). Visual perception systems for humanoid robots: State estimation and environmental interaction. Robotics and Autonomous Systems, 175, 104681.
- Nguyen, T., et al. (2019). Photogrammetry for autonomous vehicle navigation. IEEE Transactions on Intelligent Transportation Systems, 20(12), 4374–4386.
- Bologna University Team. (2021). A photogrammetry-based workflow for 3D museum asset visualization. Remote Sensing, 13(3), 486.
- Jackson, S. (2021). Low-cost body scanning rigs using Canon EOS cameras. Canon Professional Stories.
- Fraunhofer Institute. (2020). CultLab3D: Automated digitization for heritage preservation. Journal of Cultural Heritage, 45, 1–13.
- Zhang, Q., et al. (2024). Deep learning in remote sensing image classification: A survey. Neural Computing and Applications, 36(24), 16727–16767.
- Paoletti, M. E., et al. (2019). Deep learning frameworks for hyperspectral image classification. IEEE Geoscience and Remote Sensing Magazine, 7(3), 22–43.
- Cheng, G., et al. (2019). Scene classification benchmarks using CNNs and GANs. ISPRS Journal of Photogrammetry and Remote Sensing, 148, 1–17.