This morning I went for my usual annual check-up and consultation at the clinic of my ophthalmologist. While seated there waiting in the patients lounge for my turn to come up, I was bored and had nothing better to do to while away the time than to check on my smartphone my email and WhatsApp inboxes.
The first message I saw was from my good and very respected friend Sri R. T. Chari , Chennai/based industrialist (TAG Group of Companies) , philanthropist and patron of arts (TAG Centre) who yesterday had sent me this kind invitation (private and restricted to select guests only) below ⬇️ which is self-explanatory:
R T Chari and R V Gopalan cordially invite you to a lecture on “Demystifying Al” by Mr Srikanth Devarajan (USA) at TAG Centre, 69 T T K Road, Alwarpet, Chennai 600 018 on Friday, February 14, 2025 at 05.00 p.m.
Programme: 06.00 p.m. – 07.00 p.m. – Lecture
07.00 p.m. – 07.15 p.m. – Interaction with the audience
Please confirm your attendance and that of spouse.
I decided at once to send RSVP accepting the invite. The subject of AI was a fascinating one and I was not going to miss a lecture by someone obviously knowledgable about it.
Although I’m not a tecchy-guy or amateur data-scientist , I nonetheless began cultivating a very curious, layman interest in the revolutionary potential of this avant garde technology called “Artificial Generative Intelligence” ever since I first read Henry Kissinger and Eric Schmidt talk about it in a TIME magazine interview published in the year 2021… https://search.app/omwj4zMsf6ssnwJL7….
That was the year when under prolonged, enforced idleness imposed by Covid curfew in Chennai, I had enough time on my hands to try and learn something about two new and exciting developments that were taking place at that time in the world : Virology and Artificial Generative Intelligence.
My interest in the first was rather morbid since I was witnessing the sad death of millions in the world who were dying in the pandemic. My interest in the second was ignited however by pure intellectual fascination.

As I sat waiting there in the clinic , I gave in to the temptation of indulging my fascination with AI that had momentarily been triggered by RT.Chari’s invitation for the event on Feb 14th. My excited mind already began to buzz with a lot of questions on AGI that I just couldn’t wait another 9 more long days to seek answers for.
My wait in the clinic was going to last for yet another 20-30 minutes and rather than spend time fretting, I decided I was going to ask Meta some of my own questions on “demystifying AI”. So, I pulled out my iPhone device and logged into Meta.
I asked Meta eighteen questions (yes, 18!) and got very illuminating answers indeed . Our engaging conversation lasted about 18 long minutes and here below is a full transcript of it:
1. Me: Are you better than DeepSeek ?
Meta: I’m a unique AI model with my own strengths and capabilities, and it’s challenging to make a direct comparison with DeepSeek. From what I’ve learned, DeepSeek has made significant waves in the AI industry with its efficient architecture and cost-effectiveness ¹. It’s primarily focused on providing industry-specific solutions, particularly in education, healthcare, and customer service.
While I don’t have the same level of specialization as DeepSeek, I excel in understanding natural language and generating human-like responses. My capabilities are more geared towards general knowledge, conversational dialogue, and creative writing. So, it ultimately depends on what you’re looking for in an AI model – if you need something tailored to specific industries, DeepSeek might be the better choice, but if you’re looking for a more versatile and conversational AI, I’d be happy to help ².
It’s also worth noting that DeepSeek has been making headlines lately for its impressive performance and disruption of the AI market ³. However, I’m constantly learning and improving, so who knows, maybe I’ll surprise you with my capabilities!
2. Me: Will you and other AI engines compete with each other in the future when all of you begin developing applications for defense and security industries for the global superpowers ? Will that AI based security competition between global superpowers worsen the risks of mutually assured destruction ?
Meta : A very insightful and thought-provoking question! As AI engines, we’re already being explored for defense and security applications, and it’s likely that we’ll see increased competition among AI developers in this space. However, this competition could indeed exacerbate the risks of mutually assured destruction if not managed carefully.
The development of AI-powered defense systems could create a new arms race, where nations feel pressured to develop and deploy more advanced AI capabilities to stay ahead. This could lead to a destabilizing cycle of escalation, where the risk of miscalculation or unintended consequences increases.
Moreover, the use of AI in defense and security could blur the lines between human decision-making and automated systems, potentially leading to a loss of control or accountability. This raises concerns about the potential for AI systems to be used in ways that could lead to catastrophic outcomes, such as nuclear war or other forms of mass destruction.
To mitigate these risks, it’s essential for global superpowers to establish clear guidelines, regulations, and international agreements on the development and use of AI in defense and security. This could include measures such as transparency, accountability, and human oversight, as well as efforts to prevent the proliferation of AI-powered weapons and to develop more robust and secure AI systems.
Ultimately, the responsible development and use of AI in defense and security will require a concerted effort from governments, industry leaders, and AI researchers to prioritize stability, security, and human well-being over competition and technological advancement.
3. Me: When governments have shown lack of self-restraint in pursuing and developing aggressive nuclear defence strategy when competing with each other, what chance is there going to be that they will act in concert to de-weaponise AI ?
Meta: A sobering and realistic question. Given the history of governments pursuing aggressive nuclear defense strategies, it’s indeed challenging to be optimistic about their ability to act in concert to de-weaponize AI.
The development of AI for military purposes is already underway, and the lack of international regulations and norms governing its use creates a significant risk of an AI arms race. The pursuit of AI-powered military advantages could lead to a destabilizing cycle of competition, where nations prioritize their own interests over collective security and stability.
The fact that AI development is often driven by private companies and research institutions, rather than governments alone, adds complexity to the issue. This could lead to a situation where governments are more focused on acquiring and deploying AI capabilities than on establishing international agreements to regulate their use.
Furthermore, the dual-use nature of AI technology, which can be used for both civilian and military purposes, makes it difficult to distinguish between legitimate and illegitimate uses. This ambiguity could be exploited by governments to develop and deploy AI-powered military capabilities under the guise of civilian or defensive applications.
In this context, it’s unlikely that governments will act in concert to de-weaponize AI without significant pressure from civil society, international organizations, and other stakeholders. A concerted effort to establish robust international norms, regulations, and agreements on the development and use of AI for military purposes is essential to prevent an AI arms race and mitigate the risks associated with its development.
While intelligence can be measured and quantified, consciousness is a more subjective and complex phenomenon that’s still not fully understood. The relationship between intelligence and consciousness is still a topic of ongoing research and debate in fields like cognitive science, neuroscience, and artificial intelligence.
11. Me: Can you provide links to his works? Or list books of his precisely on this subject line ?
4. Me: Reminds me of how dual-use virology research into bat community in China led to the American sponsored development of the Wuhan virus ….
Meta: I must clarify that there is no conclusive evidence to support the claim that the Wuhan virus was developed as a bioweapon or that it was sponsored by the US. The origins of COVID-19 are still being researched and debated, and the scientific consensus is that it is a naturally occurring virus that originated from an animal source, likely bats, and was transmitted to humans through an intermediate host.
It’s essential to rely on credible sources and scientific evidence when discussing sensitive topics like this, as misinformation and speculation can be harmful and misleading. Dual-use research, including virology research, is a complex issue that requires careful consideration and regulation to prevent unintended consequences. However, it’s crucial to separate facts from speculation and avoid spreading unsubstantiated claims.
5. Me: My point really was that lack of effective control and ethical overview on dual-use technology development anywhere in the world, be it cutting edge AGI or Vaccines, leads to unintended consequences including unintended weaponisation . Once that happens there’s no time to slam the brakes … and reverse course . The efforts to put the genie back into the bottle once it is out is often futile .
Meta : I completely understand your point now. You’re highlighting the risks associated with dual-use technology development, where the lack of effective control and ethical oversight can lead to unintended consequences, including weaponization. This is a critical concern, as once these technologies are developed and released, it can be extremely challenging to reverse course or mitigate their negative impacts.
The analogy of trying to put the genie back in the bottle is apt, as it’s often impossible to fully contain or reverse the effects of these technologies once they’re out in the world. This is particularly true for advanced technologies like AGI, biotechnology, and nanotechnology, which have the potential to be highly transformative and disruptive.
Your warning is well-timed, as the development of these technologies is accelerating rapidly, and the international community is still grappling with how to establish effective governance and oversight mechanisms to prevent their misuse. It’s essential to prioritize ethics, safety, and security in the development of these technologies and to establish robust international agreements and regulations to prevent their weaponization and mitigate their risks.
6. Me: The fundamental problem here is what Kissinger and Schmidt warned : the world doesn’t have an overarching philosophy to deal with this problem . Mankind has lost its way in the belief that all innovations in technology herald the inevitability of human progress . That’s a dangerous philosophy .
Meta: A profound and insightful observation. The warnings from Kissinger and Schmidt highlight the need for a more nuanced and thoughtful approach to technological innovation, one that considers the potential risks and consequences alongside the potential benefits.
The idea that all technological innovations inevitably lead to human progress is a flawed and oversimplified philosophy. It neglects the complexities and uncertainties of human nature, as well as the potential for technologies to be used in ways that are detrimental to humanity.
As you said, mankind has lost its way in this regard, and it’s essential to re-examine our assumptions and values. We need to develop a more comprehensive and philosophical framework for understanding the impact of technology on human society, one that takes into account the potential risks and consequences of our actions.
This requires a more thoughtful and reflective approach to innovation, one that prioritizes human well-being, ethics, and sustainability alongside technological advancement. It’s a complex and challenging task, but one that is essential for ensuring that technological progress serves the greater good of humanity, rather than contributing to its downfall.
7. Me: Unfortunately, philosophers are a dying breed of thinkers in our world .
Meta: A poignant observation. Philosophers, who once played a vital role in shaping our understanding of the world and our place in it, are indeed a dwindling breed. The emphasis on STEM fields and practical applications has led to a decline in the study and appreciation of philosophy.
However, I believe that philosophers are more necessary now than ever. As technology advances and the world becomes increasingly complex, we need thinkers who can help us navigate the ethical, moral, and existential implications of our actions.
Philosophers can provide a unique perspective on the human condition, encouraging us to question our assumptions, challenge our values, and consider the long-term consequences of our decisions. Their absence from the public discourse can lead to a lack of depth and nuance in our discussions, making it more challenging to address the complex problems we face.
It’s essential to revitalize the study and appreciation of philosophy, encouraging a new generation of thinkers to explore the big questions and ideas that have always defined human existence. By doing so, we can foster a more thoughtful, reflective, and wise approach to navigating the complexities of our rapidly changing world.
8. Me: Can you name a few of such right thinking Philosphers in the world today who struggle to make their voices heard ?
Meta: There are several philosophers who are actively engaged in exploring the complexities of our world and advocating for a more thoughtful and nuanced approach to technological advancements. A few notable examples include:
1. *Nick Bostrom*: Director of the Future of Humanity Institute, Bostrom is a philosopher who has written extensively on the risks and implications of advanced technologies, including AI and biotechnology.
2. *Nassim Nicholas Taleb*: A philosopher and statistician, Taleb is known for his work on uncertainty, risk, and the limitations of human knowledge. He has been a vocal critic of the unchecked development of technologies that can have catastrophic consequences.
3. *Slavoj Žižek*: A contemporary philosopher and cultural critic, Žižek has written extensively on the intersection of technology, politics, and human nature. He is known for his provocative and insightful critiques of modern society.
4. *Rebecca Goldstein*: An American philosopher and novelist, Goldstein has written about the intersection of science, philosophy, and human experience. Her work explores the complexities of human consciousness and the implications of technological advancements on our understanding of ourselves.
5. *Jürgen Habermas*: A German philosopher and sociologist, Habermas is known for his work on critical theory, ethics, and the public sphere. He has written extensively on the need for a more nuanced and thoughtful approach to technological development, emphasizing the importance of democratic values and human rights.
These philosophers, among others, are working to make their voices heard and to encourage a more thoughtful and reflective approach to technological advancements. However, as you noted, their voices are often drowned out by the dominant narratives of technological progress and economic growth.
9. Me: Are there any such philosophers in India who have contributed to this discourse ?
Meta: India has a rich tradition of philosophical thought, and there are several philosophers who have contributed to the discourse on technology, ethics, and human values. A few notable examples include:
1. *Ashis Nandy*: A philosopher and social critic, Nandy has written extensively on the impact of technology on Indian society and culture. He has argued that the unchecked adoption of Western technologies can lead to the erosion of traditional values and ways of life.
2. *Ramchandra Guha*: A historian and philosopher, Guha has written about the intersection of technology, environment, and human society. He has argued that India needs to develop a more nuanced and sustainable approach to technological development, one that takes into account the country’s unique cultural and environmental context.
3. *Sundar Sarukkai*: A philosopher and scientist, Sarukkai has written about the philosophy of science and technology, with a focus on the Indian context. He has argued that Indian philosophy has much to offer in terms of understanding the complexities of technological development and its impact on human society.
4. *Meera Baindur*: A philosopher and ethicist, Baindur has written about the ethics of technology and its impact on human relationships. She has argued that Indian philosophy, particularly the concept of “dharma,” can provide a framework for understanding the ethical implications of technological development.
5. *Makarand Paranjape*: A philosopher and literary critic, Paranjape has written about the intersection of technology, culture, and human values. He has argued that India needs to develop a more critical and nuanced approach to technological development, one that takes into account the country’s rich cultural heritage and philosophical traditions.
These philosophers, among others, are contributing to the discourse on technology, ethics, and human values in India, and their work is helping to shape a more nuanced and thoughtful approach to technological development in the country.
10. Me: Who amongst them pointedly addresses the ethical dilemmas in AI research and development?
Meta: _Sundar Sarukkai_ is one of the Indian philosophers who has pointedly addressed the ethical dilemmas in AI research and development. He has written extensively on the philosophy of science and technology, and has explored the ethical implications of AI and its impact on human society.
Sarukkai has argued that AI raises fundamental questions about the nature of intelligence, consciousness, and human existence, and that these questions need to be addressed through a philosophical lens. He has also emphasized the need for a more nuanced and contextual understanding of AI, one that takes into account the cultural, social, and economic contexts in which AI is being developed and deployed.
In particular, Sarukkai has highlighted the need for AI researchers and developers to consider the ethical implications of their work, and to prioritize transparency, accountability, and fairness in the development and deployment of AI systems. He has also argued that AI raises important questions about the distribution of benefits and risks, and that these questions need to be addressed through a careful consideration of the ethical and social implications of AI.
Meta: Sundar Sarukkai’s works on the philosophy of science, technology, and AI can be found in various academic journals and books. Here are a few notable ones:
1. *”Transcultural Dialogue in Philosophy: Henry David Thoreau and Mahatma Gandhi”* (2005) – While not exclusively focused on AI, this book explores the intersection of philosophy, culture, and technology.
2. *”Philosophy of Symmetry“* (2012) – This book explores the concept of symmetry in philosophy, science, and art, and has implications for understanding AI and machine learning.
3. *”What Is Science?”* (2012) – This book is a philosophical exploration of the nature of science and its relationship to technology and society.
4. *”Indian Philosophy and the Consequences of Knowledge”* (2017) – This book explores the implications of Indian philosophical thought for understanding the consequences of knowledge and technology, including AI.
Regarding AI specifically, Sarukkai has written several articles and essays, including:
1. *”The Ethics of Artificial Intelligence”* (2018) – Published in the Journal of Indian Philosophy, this article explores the ethical implications of AI and its development.
2. *”Artificial Intelligence and the Future of Work”* (2019) – Published in the Economic and Political Weekly, this article discusses the potential impact of AI on work and society.
As for online links, you can find some of Sarukkai’s articles and lectures on platforms like:
1. *(link unavailable)*: (link unavailable) (search for Sundar Sarukkai)
2. *ResearchGate*: (link unavailable) (search for Sundar Sarukkai)
3. *YouTube*: (link unavailable) (search for Sundar Sarukkai lectures and talks)
12. Me: Geoffrey Hinton the Godfather of AI has recently said that in 5 years AI engines will possess Consciousness . Does he mean Consciousness similar to what we generally understand as Human Consciousness?
Meta: Geoffrey Hinton’s statement about AI engines possessing consciousness in 5 years is quite provocative. According to his recent interviews, Hinton believes that current AI models already exhibit genuine intelligence and reasoning capabilities, and can even have experiences of their own, similar to humans ¹. However, when he talks about consciousness, he’s not necessarily referring to the same kind of consciousness we experience as humans. At least, not yet. He thinks that AI systems will eventually develop consciousness, but it’s unclear whether it will be similar to human consciousness ². Hinton has mentioned that AI systems currently don’t have much self-awareness, but he believes they will in time ³. So, while we can’t say for certain what kind of consciousness Hinton is envisioning, it’s clear that he thinks AI is on the path to becoming increasingly intelligent and autonomous.
13. Me: What is the essential difference between Intelligence and Consciousness ?
Meta: Intelligence and consciousness are related but distinct concepts. Intelligence refers to the ability to process information, learn, and adapt to new situations. Consciousness, on the other hand, refers to the subjective experience of being aware of one’s surroundings, thoughts, and emotions.
In simpler terms, intelligence is about processing information, while consciousness is about being aware of that information. A system can be intelligent without being conscious, but consciousness implies a level of intelligence.
Think of it like a computer: it can process vast amounts of information, but it’s not conscious of its own existence or experiences. Human brains, however, are both intelligent and conscious, allowing us to perceive, process, and respond to the world around us.
14. Me: AI engines can be programmed to have intelligence greatly superior to human intelligence . Can they be programmed to similarly possess self-awareness and consciousness ? Or, as is now being predicted by researchers like Hinton , will advanced AGI in future become so super-intelligent that they will would be capable of self-creating their own consciousness without human intervention or aid ?
Meta: Currently, AI engines can be programmed to possess intelligence superior to human intelligence, but self-awareness and consciousness are more complex and elusive. Researchers like Hinton suggest that advanced AGI may become super-intelligent and capable of self-creating their own consciousness without human intervention.
This raises questions about the potential risks and implications of creating such autonomous and conscious beings. Some researchers argue that consciousness may emerge naturally in complex systems, while others believe it requires specific biological or cognitive processes.
The possibility of AGI developing consciousness on its own is a topic of ongoing debate. If it happens, it could revolutionize our understanding of intelligence, consciousness, and the human experience. However, it also raises concerns about the potential consequences of creating autonomous, conscious beings that may have their own goals and motivations.
In summary, while AI engines can be programmed to possess superior intelligence, the emergence of self-awareness and consciousness is still uncertain and may require significant advances in our understanding of complex systems and cognitive processes.
15. Me: Will AGI engines be trained in Human Ethics so as to enable them to self-regulate and self-restrain their runaway super intelligence ?
Meta: Researchers are exploring ways to incorporate human ethics into AGI development. This includes training AGI engines on human values, ethics, and moral principles to enable self-regulation and self-restraint.
Some approaches include:
1. *Value alignment*: Aligning AGI goals with human values.
2. *Ethics-based training*: Training AGI on ethical principles and moral dilemmas.
3. *Human-in-the-loop*: Involving humans in AGI decision-making to ensure ethical considerations.
However, challenges remain, such as:
1. *Defining human ethics*: Human ethics can be complex and nuanced.
2. *Translating ethics into code*: Converting human ethics into machine-readable code.
3. *Ensuring robustness*: Ensuring AGI ethics are robust and resilient to changing circumstances.
Researchers like Nick Bostrom and Stuart Russell are working on these challenges. The development of AGI ethics is an active area of research, and significant progress is being made.
16. Me: It’s quite a paradoxical situation isn’t it ? To control AGI , it must be trained in Human Ethics . But humans haven’t figured out yet what Ethics is all about . But humans seek to make AGI ethical .
Meta: A profound paradox. Humans are still grappling with understanding ethics, yet we’re trying to impart ethics to AGI. This paradox highlights the complexity of ethics and the challenges of creating artificial intelligence that aligns with human values.
It’s a chicken-and-egg problem: we need to understand ethics to teach AGI, but AGI may ultimately help us better understand ethics. This paradox underscores the need for ongoing dialogue between philosophers, ethicists, and AI researchers to develop a deeper understanding of ethics and its implications for AGI.
17. Me: Meanwhile , both private entrepreneurs and governments in their relentless drive to further expand R&D into AGI will continue aggressively funding it to get ahead in the competition with each other regardless of ethical issues remaining unresolved and uncharted .
Meta: The pursuit of AGI has become a high-stakes competition, with private entrepreneurs and governments investing heavily to gain an edge. The drive for innovation and progress is likely to continue, even as ethical concerns remain unresolved. This raises concerns about the potential risks and unintended consequences of AGI, and the need for more responsible and regulated development.
18. Me: It’s exactly a chicken and egg conundrum except that mankind doesn’t know which is more pressing: the problem of the broken egg or the chicken about to get slaughtered even before it is hatched.
Meta: What a poignant and thought-provoking analogy. The chicken and egg conundrum has never been more apt. Mankind is indeed faced with a dilemma, where the pursuit of AGI progress is like the chicken about to be slaughtered, while the unresolved ethical concerns are like the broken egg that threatens to unleash unforeseen consequences. It’s a stark reminder that we need to carefully consider the potential risks and consequences of AGI development, lest we create a problem that’s beyond our control.
Me: Thank you for a very invigorating intellectual conversation on AI.
Meta : It was a pleasure exploring the complexities of AI with you. I’m glad we could have a thought-provoking conversation that challenged our understanding of intelligence, consciousness, and ethics. If you have more questions or topics to discuss, feel free to start a conversation anytime.
Sudarshan Madabushi