What Does the AI Ecosystem Look Like? You Need AI, Does the AI Need You?
- Jan 21
- 9 min read
Artificial Intelligence (AI) is the defining technology of this decade. It has captured headlines all over the world, and we see breakthroughs every day.
Did you know that, time and again, scholars have examined the relationship between technology and society? The literature suggests that technology never develops in a silo: it shapes society, the way we behave and interact with it, and it sometimes even reinforces societal inequalities.

Similarly, AI’s evolution is not merely a technical phenomenon but an economic, social, and ethical transformation.
Mapping the AI Ecosystem: A Multi-Layered Machine
Let us start by understanding the AI Ecosystem.
The AI ecosystem is a globally dispersed, multi-layered value chain that stretches from the rare earth mines that supply its hardware's raw materials to the end-user applications on your phone. Understanding this structure is key to assessing both AI's economic contribution and its areas of risk.
The Five Layers of the AI Ecosystem

The AI value chain can be broken down into five layers:
1. Foundation and Research Layer:
This layer is like the mind of the AI. It comprises academic institutions, research labs (both corporate and government), big technology companies, and open-source communities that develop the fundamental algorithms and models through techniques like machine learning, deep learning, reinforcement learning, and generative AI training. These institutions best understand the core capabilities and limitations of the technology.
2. Infrastructure and Hardware Layer:
This layer is like the body of the AI. It provides the massive computational resources necessary to train and deploy complex AI models. Key components of this layer include specialised silicon and data centres. By one widely cited estimate, training a single large language model (LLM) can emit as much carbon as five cars do over their lifetimes. This layer is therefore responsible for the bulk of AI's carbon emissions, as well as for the water consumed in cooling data centre servers.
Graphics Processing Units (GPUs), interestingly first designed for rendering video game graphics, are highly effective specialised silicon for AI because they contain thousands of small cores that can perform many calculations simultaneously. This makes them ideal for the repetitive, parallel calculations of neural network training. GPUs are complex microchips built almost entirely out of semiconductors, and the massive number of transistors on these chips is what enables their parallel processing, making GPUs and TPUs (Tensor Processing Units) far more efficient than standard CPUs at the matrix algebra deep learning requires.
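To make that parallelism concrete, here is a minimal sketch in PyTorch (a framework discussed in the tools layer below) that times the same matrix multiplication on the CPU and, if one is available, a GPU. The matrix size is arbitrary, and the speed-up will vary by hardware.

```python
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    """Multiply two n x n random matrices on the given device; return seconds."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # GPU kernels launch asynchronously
    start = time.perf_counter()
    _ = a @ b  # the core operation behind neural-network training
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the multiplication to finish
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    # Thousands of GPU cores work on the same matrix in parallel.
    print(f"GPU: {time_matmul('cuda'):.3f} s")
```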
It is also worth noting the critical role of rare earth minerals, which are indispensable to the AI hardware layer as high-performance materials in multiple components. Elements like lanthanum and yttrium are used to engineer faster, more power-efficient AI chips by improving transistor performance, while elements like neodymium and dysprosium form the powerful magnets essential for data centre cooling systems and for motors in robotics, enabling the physical operation and scale of the AI ecosystem. The International Energy Agency forecasts that extraction of critical minerals will need to roughly quadruple by 2040 to meet demand from AI, digital, and renewable technologies.
Data Centres
Data Centres are the physical locations that house, power, and cool the vast network of servers containing the specialized silicon. They provide the necessary environment - stable power supply, high-speed networking, and immense cooling systems to operate thousands of GPUs and TPUs continuously. Companies like Amazon Web Services (AWS), Microsoft Azure, etc., offer the computational scale needed for AI development. Instead of individual companies having to build their own massive data centres, they can rent exactly the resources they need on demand from the cloud, enabling faster development and deployment of AI models globally.
These centres require immense amounts of energy and water for cooling, making this layer the primary source of AI’s environmental footprint.
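As an illustration of renting compute on demand rather than building it, here is a minimal sketch using AWS's boto3 SDK to request a single GPU instance. The AMI ID is a placeholder and the instance type is just one example; real usage requires configured credentials, quotas, and billing.

```python
import boto3

# Connect to the EC2 service (requires configured AWS credentials).
ec2 = boto3.client("ec2", region_name="us-east-1")

# Request one GPU-equipped virtual machine on demand.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID; look up a real one
    InstanceType="p3.2xlarge",        # one example of an NVIDIA GPU instance type
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```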

3. Data Layer:
Data has been called the fuel of this century. This layer involves the collection, cleaning, labelling, storage, and management of the vast datasets needed for training. The quality and diversity of this data are critical, because biases embedded here are inherited by the resulting AI systems. Stakeholders in this chain include data brokers, annotation services, and organisations creating large, sector-specific data repositories. This layer directly connects to ethical concerns around privacy, security, and bias.
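A first-pass check for one kind of embedded bias can be as simple as comparing label rates across groups. The sketch below uses pandas on a made-up dataset; the column names and threshold are purely illustrative.

```python
import pandas as pd

# Hypothetical labelled dataset: column names and values are illustrative only.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "label": [1, 1, 0, 0, 0, 1],  # e.g. 1 = "high risk" as assigned by annotators
})

# Rate of positive labels per demographic group.
rates = df.groupby("group")["label"].mean()
print(rates)

# A large gap between groups is a signal for deeper auditing, not proof of bias.
if rates.max() - rates.min() > 0.2:  # threshold is an arbitrary example
    print("Warning: label rates differ sharply across groups; audit the pipeline.")
```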

4. Development and Tools:
This layer consists of the software frameworks, platforms, and Application Programming Interfaces (APIs) that make AI usable for a wider audience. It sits immediately on top of the Infrastructure Layer. Development starts with specialised, open-source programming frameworks, the two most popular being PyTorch and TensorFlow. To simplify things further, major tech companies offer Platform-as-a-Service (PaaS): cloud platforms that let developers rent ready-made AI capabilities or manage a whole project without dealing with the underlying servers. This layer also includes Machine Learning Operations (MLOps) tools, which embody a set of practices ensuring that once a model is built, it can be automatically deployed, tracked, and monitored. Such monitoring detects when a model starts to make bad predictions and can trigger an automated update, as sketched below. Much of the conversation around agentic AI monitoring, human-in-the-loop oversight, and AI certification systems centres on MLOps.
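As a toy illustration of what that monitoring loop might look like, here is a minimal sketch in Python; the window size, accuracy threshold, and retraining hook are all hypothetical, not taken from any specific MLOps product.

```python
from collections import deque

WINDOW = 500        # number of recent predictions to track (hypothetical)
THRESHOLD = 0.85    # minimum acceptable rolling accuracy (hypothetical)
recent = deque(maxlen=WINDOW)

def trigger_retraining() -> None:
    """Hypothetical hook: a real MLOps pipeline would enqueue a retraining job here."""
    print("Rolling accuracy degraded; scheduling automated retraining.")

def record_outcome(prediction, actual) -> None:
    """Log each prediction against ground truth as it arrives from production."""
    recent.append(prediction == actual)
    # Once the window is full, check whether the model has started to drift.
    if len(recent) == WINDOW and sum(recent) / WINDOW < THRESHOLD:
        trigger_retraining()
        recent.clear()  # reset so one bad window does not re-trigger repeatedly
```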
5. Application and Services:
The final step is the Applications and Services Layer, the layer you actually interact with every day. This is where AI delivers its value, built directly into consumer and business products across all industries. For individual consumers, this layer includes generative AI tools like ChatGPT that write content, voice assistants that answer your questions, and so on. Enterprise applications focus on optimising internal business processes, such as predictive maintenance in manufacturing to prevent equipment failure, demand forecasting in retail and supply chains to manage inventory, and faster diagnostics and drug discovery in healthcare. Meanwhile, the public sector uses AI to improve governance and citizen services through smart city initiatives like optimising traffic flow, predictive modelling in public health to track and manage disease spread, and automating vast amounts of government document processing with NLP tools.
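Much of this layer is, in practice, a thin wrapper around a hosted model. As a minimal sketch, here is how an application might call a generative model through OpenAI's Python client; the model name is illustrative, and an API key must be configured.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Ask a hosted generative model to draft some text, as a consumer app might.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; substitute any available model
    messages=[
        {"role": "user", "content": "Summarise predictive maintenance in one sentence."}
    ],
)
print(response.choices[0].message.content)
```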
All these processes and uses make us question: Given the resources that go into building this system, what do its economic returns look like?
How Much Does AI Give Back?
The economic impact of AI is often termed monumental. MIT Institute Professor Daron Acemoglu, however, recently predicted that AI will have a nontrivial but modest impact on GDP in the next decade: around 1.1% for the U.S., significantly lower than the $7 trillion increase predicted by some others. This conservative estimate rests on the finding that while many tasks are technically exposed to AI, only about 5% of all tasks economy-wide are expected to be profitably performed by AI within that horizon, because implementation costs remain high for the rest. Prof. Acemoglu suggests that to unlock AI's significant potential, its current development must undergo a fundamental reorientation. In his view, industry's focus on creating general, human-like conversational tools and large foundation models is misplaced. Instead, developers should prioritise building AI tools that provide reliable, context-dependent, and real-time information to raise the marginal productivity of workers in problem-solving professions where AI is currently absent, such as electricians, nurses, plumbers, and educators.
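In stylized form, the arithmetic behind such estimates follows Hulten's theorem: the aggregate gain is roughly the share of tasks AI performs profitably times the average cost saving on those tasks. The 20% cost saving below is an illustrative figure chosen only to show how a 5% task share keeps the aggregate effect near the ~1% quoted above; it is not Acemoglu's exact number.

```latex
% Stylized, Hulten-style productivity accounting (illustrative numbers):
%   GDP gain ~ (share of tasks AI performs profitably) x (average cost saving)
\Delta \ln(\mathrm{GDP}) \;\approx\; s \cdot \bar{c}
  \;=\; 0.05 \times 0.2 \;=\; 0.01 \;\approx\; 1\% \ \text{over a decade}
```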
Beyond the economic impact, which is in itself a conversation that warrants more research, it is also time we talk about the ethical debate around AI.

Talking about ethics makes us ask: can machines exhibit moral values and ethics?
Research from Science, Technology, and Society offers great insights into this. Latour, in his work on the "missing masses", argues that objects like closed doors and seat belts are the missing masses of sociology: they enforce rules, discipline, and ethical behaviour far more reliably than human moral codes or social pressure alone, effectively making certain actions mandatory or impossible. Engineers and designers delegate human tasks to nonhuman objects (a door closer replaces a human door operator), and in doing so they inscribe their own values, intentions, and power into the artifact, acting at a distance by silently prescribing the user's actions. The prescribed user is thus built on the narrative the technology's makers intended, and communities that deviate from that prescribed user are excluded. This is precisely how algorithmic bias works.
To achieve true inclusion and challenge this centralized power, co-creation with diverse global communities is essential to shift the technology’s authorship, ensuring its embedded rules reflect a broader range of global moralities and practices.
The ethical concerns around AI are not limited to algorithmic bias or its environmental impact. They also include privacy and data protection in the face of mass collection and surveillance; accountability and liability when autonomous systems cause harm; the risk of exacerbating economic inequality and mass job displacement through automation; the opacity of complex models, which makes them difficult to audit or explain; and the need for safety and alignment to prevent unintended or catastrophic failures.
There are also important concerns surrounding unethical data labelling practices.

Unethical data labelling practices pose significant risks to both the people involved and the resulting AI systems. They often involve exploitative labour, where human annotators face low pay, demanding quotas, and poor working conditions without adequate support; this is often termed ghost work. Unethical labelling can also introduce or exacerbate algorithmic bias, whether intentionally or through negligence, by relying on non-representative, prejudiced, or stereotype-reinforcing labels and datasets; overworked annotators are rarely able to work diligently, which compounds the problem. This not only leads to unfair or discriminatory AI outcomes, such as biased facial recognition or loan approval systems, but also raises serious questions about data privacy and consent.
Fortunately, initiatives specifically targeting ethical data labelling practices have gained momentum. In India, for instance, Karya specialises in creating high-quality, culturally sensitive AI datasets and services (like LLM evaluation and NLP) through a scalable, pan-India network. It treats data annotation as an economic opportunity, paying rural Indian contributors roughly 20 times the minimum wage. Karya has also developed a Public Data License that gives workers de facto ownership of the data they create, allowing them to earn additional royalties whenever that data is resold to clients.
Let’s talk about some cases where the use of AI has resulted in actual harm.
Perhaps one of the most influential is ProPublica's investigation of the COMPAS algorithm, a criminal risk assessment tool used in U.S. courts to predict who might reoffend. The investigation found the tool was biased against Black defendants: it often labelled them high risk even when they did not reoffend, while labelling white defendants low risk even when they did. The system's predictions were only about 61% accurate, and, given the lack of transparency about its algorithm, it effectively entrenched racial bias. The case illustrates the premise that technology is not neutral: if the data or design reflects social bias, the algorithm will reproduce it.
On the safety front, Tesla has recently been involved in several safety-related incidents. Its Autopilot and the more advanced Full Self-Driving Beta have come under scrutiny following numerous crashes. Key causes include the system failing to recognise stationary objects at high speed, inadequate driver monitoring that enables misuse, unsafe behaviour such as running red lights or driving on the wrong side of the road, and phantom braking, which together have led to serious injuries and reported fatalities.
This is where the need for human-in-the-loop and continuous auditing of these systems comes in. These practices must be embedded in the functioning of the AI system. They must form part of what one may call – “The AI’s Manual”. The idea is to have a standardised AI governance framework in place.
Human in the Loop – What is it, how does it work?
Human-in-the-loop (HITL) is a collaboration model in which human judgment and expertise are integrated into the lifecycle of an AI or machine learning system. HITL can be implemented at various stages of that lifecycle. During the training phase, humans provide labelled data to the AI. During the deployment phase, human experts intercept and review AI outputs that either carry a low confidence score, meaning the model is unsure of its prediction, or involve high-stakes decisions.
To understand the difference, consider the need to intervene in an AI-integrated washing machine versus an AI-integrated self-driving car. The stakes are far higher for the car, warranting enhanced oversight; the sketch below shows the basic routing logic.
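A minimal sketch of that deployment-phase pattern in Python: outputs are routed to a human reviewer when confidence is low or the decision is high-stakes. The threshold and example labels are illustrative, not drawn from any particular system.

```python
CONFIDENCE_THRESHOLD = 0.9  # illustrative cut-off for automatic acceptance

def route_prediction(label: str, confidence: float, high_stakes: bool) -> str:
    """Decide whether an AI output ships automatically or goes to a human reviewer."""
    if high_stakes or confidence < CONFIDENCE_THRESHOLD:
        return "human_review"  # queued for an expert to approve, correct, or override
    return "auto_accept"       # the model acts on its own

# A driving decision is high-stakes, so it is reviewed even at high confidence;
# a washing-machine cycle suggestion is not.
print(route_prediction("emergency_brake", confidence=0.97, high_stakes=True))  # human_review
print(route_prediction("eco_wash_cycle", confidence=0.97, high_stakes=False))  # auto_accept
print(route_prediction("eco_wash_cycle", confidence=0.55, high_stakes=False))  # human_review
```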

Many tools are available to implement this process. Workflow orchestration tools, such as Google Cloud Vertex AI, Labelbox, or LangGraph for large language models, provide the infrastructure to build and manage the continuous human-AI feedback cycle. To maximise efficiency and minimise human effort, active learning libraries are also used: these implement intelligent query strategies, like uncertainty or diversity sampling, so that human reviewers are only presented with the most valuable and informative data points to label (see the sketch below). Custom user interfaces are designed to clearly present the AI's output, confidence level, and rationale, enabling a human reviewer to quickly make an informed judgment, correction, or override.
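Uncertainty sampling, one of the query strategies just mentioned, can be sketched in a few lines with scikit-learn: the pool items whose predicted class probabilities are least confident are the ones surfaced to human labellers. The data and model here are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder data: a small labelled set and a large unlabelled pool.
X_labelled = rng.normal(size=(20, 5))
y_labelled = np.array([0, 1] * 10)        # toy labels; both classes present
X_pool = rng.normal(size=(1000, 5))

model = LogisticRegression().fit(X_labelled, y_labelled)

# Uncertainty sampling: 1 - max class probability peaks where the model is
# least sure, so those pool items are the most informative ones to label.
probs = model.predict_proba(X_pool)
uncertainty = 1.0 - probs.max(axis=1)
to_label = np.argsort(uncertainty)[-10:]  # indices of the 10 most uncertain items
print(to_label)
```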
However, challenges remain and must be addressed. Scholars note that the difficulties in HITL systems revolve around human factors and the methodological problems they create. Although integrating human expertise is beneficial, it introduces bias, subjectivity, and inconsistency, which can degrade model performance. Involving specialised human knowledge also makes the process complex, costly, and difficult to scale with the ever-increasing volume of data. There is a need to improve the quality of human feedback through more specialised training, as well as further research on human agency and bias.
Concluding Remarks
Looking at AI from a broader socio-technical perspective is important. Unless human-centred inclusion practices are built into the making of this technology at every step, we risk AI simply automating the inequalities of the past. Looking beyond this requires a holistic approach and collaboration among various stakeholders, and ultimately among the end consumers of AI across diverse cultures and demographics.

