Architectural Gaps in Generative AI: Quantifying Cognitive Risks for Safety Applications



Bibliographic Details
Main Authors: He Wen, Pingfan Hu
Format: Article
Language: English
Published: MDPI AG, 2025-06-01
Series: AI
Online Access: https://www.mdpi.com/2673-2688/6/7/138
Description
Summary: <b>Background</b>: The rapid integration of generative AI systems, such as ChatGPT, into industrial, process, and construction management introduces both operational advantages and emerging cognitive risks. While these models support task automation and safety analysis, their internal architecture differs fundamentally from human cognition, posing interpretability and trust challenges in high-risk contexts. <b>Methods</b>: This study investigates whether architectural design elements in Transformer-based generative models contribute to a measurable divergence from human reasoning. A methodological framework is developed to examine core AI mechanisms—vectorization, positional encoding, attention scoring, and optimization functions—focusing on how each introduces a quantifiable “distance” from human semantic understanding. <b>Results</b>: Through theoretical analysis and a case study involving fall-prevention advice in construction, six types of architectural distance are identified and evaluated using cosine similarity and attention mapping. The results reveal misalignments in focus, semantics, and response stability, which may hinder effective human–AI collaboration in safety-critical decisions. <b>Conclusions</b>: These findings suggest that such distances represent not only algorithmic abstraction but also potential safety risks when generative AI is deployed in practice. The study advocates for the development of AI architectures that better reflect human cognitive structures, in order to reduce these risks and improve reliability in safety applications.
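The abstract's evaluation metric, cosine similarity, can be illustrated with a minimal sketch. The vectors and variable names below are hypothetical toy values for illustration only; they are not taken from the paper's model or data. Semantic "distance" is then one minus the similarity.

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: dot product divided by
    # the product of their magnitudes. 1.0 means identical direction,
    # 0.0 means orthogonal (no semantic overlap in embedding terms).
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "embeddings" of a human expert's focus vs. a model's focus
# (purely illustrative values, not from the study):
human_focus = [0.9, 0.1, 0.3]
model_focus = [0.2, 0.8, 0.5]

similarity = cosine_similarity(human_focus, model_focus)
distance = 1 - similarity  # larger value = greater semantic divergence
```

In the paper's framing, a larger distance between the human and model representations of the same safety concept (e.g., a fall-prevention instruction) would signal one of the architectural misalignments the study quantifies.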
ISSN: 2673-2688