How Large Language Models Work: Meaning, Context, and Attention with Practical Insights

Human AI Interaction showing how meaning, context, and attention work in large language models

Effective Human–AI Interaction determines how meaning and attention remain stable across complex multi-turn exchanges.

Human–AI Interaction: A Practical, Measurable Framework for Meaning Production and Error Control
By Soheila Dadkhah

Introduction. Human–AI Interaction is a central factor in how large language models generate meaning, manage context, and align attention in real-world use. Large […]
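The excerpt above refers to how models "manage context, and align attention." For readers who want a concrete picture of that mechanism, the sketch below shows generic scaled dot-product attention in plain NumPy. It is an illustrative assumption of mine (function name, shapes, and toy data are all invented), not code from the article itself.

```python
# Minimal sketch of scaled dot-product attention (illustrative only;
# names, shapes, and data are assumptions, not the article's code).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: arrays of shape (seq_len, d_k). Returns context-weighted values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V                                # mixture of values per token

# Toy usage: three tokens with four-dimensional representations.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)   # (3, 4)
```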

How Large Language Models Work: Meaning, Context, and Attention, Part 1

Large Language Models

The Operational Language of Large Language Models
By Soheila Dadkhah

1. Introduction. Large Language Models (LLMs) have emerged as one of the most influential developments in contemporary artificial intelligence research. Their rapid progress has transformed the landscape of natural language processing, enabling machines to perform tasks that were previously considered to require deep linguistic understanding, […]

Ishkar Artificial Intelligence


An Academic Article Based on Standard Scholarly Writing and Google-Indexed Academic Guidelines
By Soheila Dadkhah

Abstract. This article introduces Ishkar Artificial Intelligence as an interaction-oriented linguistic intelligence framework and presents its theoretical foundations, design logic, applied techniques, and evaluation criteria. The text follows organizational conventions commonly used in academic and research reporting and draws on […]