The Fear of AI: History Repeating Itself, or…?

Sep 6, 2024 | blog

by Maurice E. Bakker BSc MBA, partner

Humans have a long track record of initially resisting disruptive technologies before eventually embracing them; the phenomenon is almost as old as humanity itself. It is natural for us humans that each breakthrough that shakes up the status quo sparks worry in its early days. Let me start with some historical parallels: examples of how new tech has rattled human society through the ages.

  1. Ancient Greece: Even Socrates cautioned that the new technology of writing would “create forgetfulness” in learners’ minds (since people might rely on written words rather than memory).
  2. 1560s Printing Press: Swiss scholar Conrad Gessner fretted that an explosion of printed information was “confusing and harmful” to the mind – an early alarm about information overload.
  3. 1810s Industrial Revolution: The original Luddites – English textile artisans – literally smashed automated looms, convinced these machines would deskill them and destroy their livelihoods.
  4. 1960s Early AI & Automation: As computers and artificial intelligence first emerged, many people warned of an impending “automation crisis,” even invoking fears of the “domination of man by the Machine.”

So yes, AI anxiety isn’t entirely new. There is a common thread in human behavior, a clear recurring pattern. Transformative tech often triggers anxiety about lost skills, jobs, or control. Every generation fears the latest innovation might spell disaster – whether for our minds, our culture, or our careers. As one commentator wryly noted, there’s “always a new technology” thought to “ruin the minds of [our] children or steal all of our jobs.” Yet time and again, society adapts. The printing press ended up enlightening more than it harmed. The Luddites’ worst job-loss fears didn’t fully materialize. Instead, new industries and roles eventually formed. Initial resistance sometimes even highlights valid concerns (safety, ethics, equity) that push us to shape technology for the better. In hindsight, those early fears often appear excessive, but they remind us that public caution can guide responsible innovation.

Fast-forward to the present and the pattern continues with artificial intelligence. AI is sweeping through workplaces and daily life, provoking excitement and alarm. Many professionals worry about everything from automation-driven unemployment to AI bias and privacy risks. In fact, recent surveys show over half of Americans are more concerned than excited about AI’s growing role in daily life. We’re hearing familiar questions: Will AI tools replace human jobs? Are we losing the “human touch”? Just as before, society is grappling with the trade-off between embracing innovation and holding onto what we value.

History teaches that initial resistance is a feature, not a bug, of how we adopt new technology. Fear and skepticism accompany each tech revolution – and can be productive if they spark dialogue and demand accountability. Ultimately, most technologies we once dreaded (from electricity to the internet) became integral after we addressed their pitfalls. The lesson for AI? Rather than dismissing critics as mere Luddites, we should address their concerns while moving forward. Change is inevitable, but how we manage it is up to us.
Will today’s AI fears fade like those of past innovations, or is this time truly different? By learning from history’s tech transitions, we can strive to ensure AI becomes a tool that augments humanity rather than alienates it.

So here is the long read:
Humans have a long-standing tendency to meet new technologies with skepticism or fear. Psychologically, a key driver is fear of the unknown – our brains exhibit a negativity bias that makes us focus on potential dangers of a new tool more than its benefits. This bias is thought to be a survival mechanism from our evolutionary past, when reacting cautiously to unfamiliar things could protect us from harm. Even in modern settings, people often prefer the familiar and predictable, finding comfort in routines; sudden changes (like a newly introduced technology) can feel threatening and unsettling. In short, when faced with something radically new, our default response is often protective wariness rather than open-armed embrace.
Evolutionary psychology suggests this wariness had survival value. Early humans who were too eager to try unknown things might have taken lethal risks, whereas those who were cautious lived to pass on their genes. “Humans are instinctively designed to react to novel things in a way that aims to protect oneself,” notes innovation scholar Calestous Juma. In practice, this means people tend to imagine what could go wrong – loss of safety, status, or livelihood – when confronted with disruptive tech. Fear of loss is a powerful motivator; individuals worry a new invention might take away their jobs, skills, or social order. This is closely tied to loss aversion (we weigh potential losses more heavily than gains) and a general resistance to change known as status quo bias. As a result, the immediate reaction to novel technology often skews negative or anxious until proven otherwise.
Other psychological factors include lack of understanding (complex technologies like artificial intelligence can seem like “black boxes,” breeding mistrust) and perceived control. If people feel they can’t control or comprehend a technology, they’re more likely to resist it. In fact, technophobia (an irrational fear of technology) can manifest when devices or algorithms seem too complex to grasp. Emotions like anxiety and uncertainty often emerge alongside new tools, underscoring that resistance isn’t purely rational. It’s not that people hate innovation per se; rather, they fear what negative consequences it might bring. As one writer observed, “The overarching fear is that any popular technology will fundamentally alter a previously satisfying, well-balanced existence and destroy the society we’ve worked so hard to build.” In essence, we project our worries onto tech: Will this disrupt my life? Erode my skills? Undermine my values or privacy? Such questions can trigger an instinctive defensive stance toward cutting-edge tech, including AI.
New technologies provoking fear and backlash is not a recent phenomenon. History is replete with examples of innovations that were first met with panic or fierce resistance before eventually becoming accepted. A few notable cases follow.
Writing and the printing press. Yes, seriously. Concerns about new information technology go back as far as ancient Greece. The philosopher Socrates warned that the invention of writing would “create forgetfulness in the learners’ souls” and give people only “the semblance of truth” instead of real wisdom. Centuries later, when Johannes Gutenberg’s printing press (15th century) enabled mass-produced books, it upended the world of hand-copied manuscripts. Scribes’ guilds across Europe reacted with alarm – they even destroyed printing presses and chased early printers like Johann Fust out of town, accusing them of witchcraft. Both religious and secular authorities fretted that easy access to printed material would spread errors or subversive ideas, undermining their power. Indeed, one monk in 1492 (Johannes Trithemius) insisted that “Printed books will never be the equivalent of handwritten codices,” disparaging the new medium’s quality. Today, it’s almost hard to imagine printed books as dangerous – in hindsight the printing press is seen as a cornerstone of progress, vastly expanding literacy and knowledge. The fears of scribes losing their livelihoods or society collapsing under misinformation did not come to pass; instead, scribes adapted (eventually becoming printers themselves) and society reaped enormous benefits. As one commentator wryly noted about those times, “it’s hard to view the printing press as a disastrous blot on history” now.

 

A modern newspaper printing press. When the printing press was new in the 15th century, many craftsmen feared it. Scribes’ guilds in Europe smashed early presses out of fear that mass-produced books would destroy their livelihood and spread dangerous ideas. Eventually, printed media became an accepted norm, illustrating how initial technological fears can fade over time.

The Industrial Revolution and the rise of machinery offer another fine example of this very human pattern. Fast-forward to the 18th and 19th centuries and we see a new wave of technophobia around mechanization. As textile mills and factory machines emerged in England, skilled artisans grew alarmed that machines run by unskilled labor would render their trades obsolete. Resistance had deep roots: in 1675, English weavers destroyed new mechanical looms in riots, and by 1727 machine-breaking had become so common that the British Parliament made it a capital crime. The discontent peaked roughly a century later in the famous Luddite movement (circa 1811–1813), when bands of weavers and other workers literally took hammers to the knitting frames and other devices they saw as threats to their jobs. The Luddites weren’t merely “anti-technology” fools, but workers reacting to very real economic upheaval; they feared poverty and the loss of their craft. Over time, however, industrialization marched on. While it did eliminate certain jobs, it also eventually created new industries and roles (from factory managers to mechanical engineers). The pattern repeated: fierce initial resistance, sometimes even violence, followed by begrudging adaptation. Society ultimately absorbed the new tech, though not without economic and social pain in the transition.


Then there is a smaller collection of other incidents that I like to call “techno-panics” – and no, I do not mean the music genre that formed in the 1990s. Almost every major invention has its share of early detractors. In the 19th century, the telegraph and telephone were feared by some to disrupt the social order, or even thought to invoke supernatural dangers (one early claim was that the telephone could summon evil spirits or lightning strikes). Early radio faced criticism that it would fill minds with nonsense and erode morality. The arrival of television in the mid-20th century sparked worry that kids would stop reading and families would stop talking. Even the personal computer and the internet were viewed with suspicion by some in their early days as being too complicated, or likely to isolate people. In each case, a now-familiar theme played out: initial alarmist rhetoric (often predicting the “death” of something – memory, jobs, social life, etc.), followed by gradual acceptance as the technology proved its utility or became impossible to ignore. “We forget that technologies are actually reflections of our values and desires, and they co-create one another,” observes researcher Coye Cheshire, noting that fears about memory loss with the printing press resurfaced almost identically with the internet centuries later. The consistency of these techno-panics through time is remarkable. As one writer put it, “what’s old is new again” – the same fears simply get transplanted onto the latest technology.


In light of these historical parallels, AI is simply the newest “new” technology to provoke both excitement and dread. People in the 1500s fretted about an overload of books much as people today fret about information overload or fake news on the internet. Workers in 1812 swung hammers at machines, much as some modern workers might wish to “pull the plug” on certain forms of automation threatening their jobs. Yet history shows that, overwhelmingly, society adapts. As the author of Innovation and Its Enemies notes, every innovation brings “tension between the need for innovation and the pressure to maintain continuity and stability.” Finding balance is the challenge, but eventually that tension does get reconciled – often with new norms, jobs, and cultural understandings that incorporate the once-feared technology.


Artificial Intelligence is a transformative technology that has drawn mixed reactions: some see it as a revolutionary opportunity, others as a looming threat. These perceptions vary significantly across different cultures and industries.
How AI is perceived often depends on cultural narratives and values. In Western popular culture, AI and robots are frequently portrayed as dangerous or uncontrollable (think of apocalyptic films like The Terminator or The Matrix). This reflects a deeper wariness in the West, possibly rooted in philosophical and religious outlooks. Western Judeo-Christian traditions position humans as unique, and there’s an ingrained fear of “playing God” or creating something that could overpower its creator. As a result, Western audiences have tended to imagine AI in dystopian terms (rogue AIs, robot rebellions) and often approach the technology with caution. By contrast, countries like Japan have a long-standing cultural affinity for robots and AI. Influenced by Shinto beliefs (which imbue spirits in objects) and a history of friendly robot characters (e.g. Astro Boy), Japanese society broadly views robots as helpful companions rather than sinister rivals. “We [Japanese] have no fear of our new robot overlords – we’re kind of looking forward to them,” writes one Japanese commentator, noting that Japanese folklore and religion never positioned humans as uniquely superior to other beings. This illustrates how religion and mythos shape attitudes: Western cultures, with a narrative of potential “overlords” and past experiences of slavery, may subconsciously fear being subjugated by AI, whereas Japanese culture, emphasizing harmony and coexistence, is more open to seeing AI as a partner.

Global surveys confirm these East-West differences. In a 2023 Ipsos poll spanning 30+ countries, excitement about AI’s possibilities was highest in emerging Asian and Middle Eastern markets, and lowest in Europe and North America. Conversely, nervousness about AI was most prevalent in predominantly English-speaking countries (e.g. US, UK, Canada), and much lower in places like Japan and South Korea. For example, majorities in India and China tend to view AI as an engine of economic growth and improved quality of life, whereas publics in France or Germany often express skepticism, emphasizing risks like job loss or erosion of privacy. Trust levels also vary: only about 32% of people in France, Japan, and the U.S. say they trust companies to use AI in a way that protects personal data, versus over 70% in places like Thailand. Such contrasts may be due to differing media narratives, education about AI, and recent experiences. Countries that have rapidly benefited from tech development (and may have fewer legacy systems or regulations) often welcome AI with optimism. Meanwhile, nations that place a high value on personal data privacy or that have strong labor protections might be more wary of AI’s disruptive power.
It’s important to note that attitudes are not monolithic within a culture either – they can differ by generation and other factors. Younger people worldwide are generally more positive about AI. Pew Research found that younger adults are more inclined to try new tech and have higher interest in science/technology topics. This “digital native” cohort sees AI as a normal extension of the tech they grew up with, rather than an alien invasion. In contrast, older generations who did not come of age with AI may feel less comfortable and focus on what’s being lost. Education and tech literacy also play a role: familiarity breeds comfort. Someone who understands how AI works (or uses it regularly, say in smartphone apps or voice assistants) is less likely to view it mystically or malignly. In cultures or subcultures with higher tech education, AI is more often seen as a tool than a threat.


Just as cultures differ in their view of AI, so do industries and professions. AI’s impact is highly context-specific: it can be an existential threat to one field and a game-changing boon to another. This leads to divergent perspectives even within the overall economy.


Consider the job market: AI and automation undeniably threaten certain jobs through displacement of human labor. Industries with routine, repetitive tasks have already seen AI-driven automation making human roles redundant – for instance, manufacturing assembly lines increasingly use robots, and clerical tasks can be handled by AI software. A widely cited analysis by Goldman Sachs in 2023 estimated that up to 300 million jobs worldwide could be disrupted by AI and automation in the coming years. Understandably, workers in roles from truck driving (facing self-driving vehicles) to customer service (facing chatbots) feel a sense of alarm. Creative professionals like illustrators, writers, and musicians have also raised concerns as generative AI can produce art, text, and music; many artists already report clients asking for AI-generated work to cut costs, putting pressure on their livelihoods. These groups often perceive AI primarily as a threat – to their employment, to the quality of their craft, or even to human uniqueness in creative endeavors.


On the other hand, many industries see AI as an unprecedented opportunity. Sectors such as healthcare anticipate huge benefits from AI in diagnosing diseases, personalizing treatments, and crunching medical data at superhuman speed. In finance, AI algorithms can detect fraud or execute trades faster and more accurately than humans. In education, AI tutors and personalized learning systems promise to enhance (not replace) teachers by tailoring material to each student. Industrial and agricultural companies use AI for predictive maintenance of equipment and optimizing supply chains, improving efficiency and safety. For these use cases, AI is largely seen as a tool that augments human capability. Even in fields where jobs might be lost, business leaders tend to emphasize that AI will create new roles and free people from drudgery to do more valuable, creative work. Historically, this has some truth: past innovations often eliminated certain occupations but gave rise to entirely new ones. For example, the mechanization of agriculture reduced farm labor dramatically, but created jobs in tractor manufacturing, food processing, and so on. Calestous Juma pointed out that new technology “typically generates as many jobs as it replaces”, often in the form of new industries and services. The World Economic Forum likewise forecast a net positive job impact from AI in an earlier report, predicting over 130 million new roles globally by 2022 even as 75 million jobs were displaced – a net gain as the economy adapts.


That said, the distribution of those gains and losses matters greatly. In the short term, specific communities or sectors can be hard-hit by automation (for instance, factory workers laid off due to AI-driven robots) while gains (like AI engineering jobs) accrue elsewhere. This uneven effect influences perceptions: tech executives and investors, who see big productivity boosts and cost savings, are typically enthusiastic about AI. In fact, many companies are racing to adopt AI to stay competitive, viewing it as an opportunity they can’t afford to miss. A recent survey of Fortune 500 firms found 56% explicitly identified AI as a “risk factor” – not because they fear AI itself, but because failing to leverage AI could threaten their business’s future. By contrast, front-line employees may feel more apprehensive, worrying that management will use AI to cut jobs or surveil workers. Labor unions and professional associations in some industries have started pushing for guidelines on AI use (for example, the Writers Guild in Hollywood negotiated limits on AI-generated scripts, reflecting writers’ fears that AI could usurp their creative roles).


In different industries, therefore, AI can wear the mask of a boogeyman or a benefactor. Tech sector professionals largely see it as an opportunity – AI is the engine driving new products and efficiencies. Manufacturing and logistics workers might see it as a threat to job security, even as their companies extol the productivity gains. Healthcare providers have a mix of excitement (for better patient outcomes) and caution (ensuring AI errors don’t harm patients, and that the “human touch” remains). Education stakeholders debate AI: is it a threat (students cheating with AI, teachers’ roles diminished) or an aid (personalized tutoring and freeing teachers from grading grunt work)? Often, the verdict is both – it’s an opportunity if harnessed well, and a threat if mishandled.

Crucially, perceptions can shift once people see AI in action. A teacher who initially feared AI might change her mind if a new AI tool helps her struggling students catch up. A factory worker might come to appreciate a robot assistant that takes on the most dangerous tasks. Across industries, this exposure effect can gradually convert skeptics into pragmatists. Nonetheless, if AI deployment is done without regard for employees’ well-being or without explaining the benefits, it can entrench resistance. Industries that proactively retrain workers for new AI-augmented roles and include them in the transition tend to foster more acceptance, viewing AI as a collaborative tool rather than a faceless job killer.


The split in attitudes toward AI is also reflected among experts and public figures. Some prominent voices issue dire warnings, while others tout AI’s promise and urge a level-headed approach. Renowned physicist Stephen Hawking cautioned that advanced AI could become “either the best, or the worst thing, ever to happen to humanity.” He warned that if we fail to prepare for AI’s risks, it could even mean the “end of the human race” in a worst-case scenario – for example, super-intelligent AI might become uncontrollable or use resources in ways that threaten humanity. Entrepreneur Elon Musk has similarly called AI an existential threat, comparing it to “summoning the demon” and suggesting AI could pose a greater danger than nuclear weapons if left unregulated. These warnings tap into the cultural narrative of AI as a potential destroyer if it surpasses human intelligence and objectives. They have contributed to a perception of AI as something we must be very cautious and fearful about, lest we unleash a monster.


On the other side, many AI researchers and tech leaders argue that these apocalyptic fears are overblown – at least with our current AI. Andrew Ng, a leading AI scientist, famously remarked: “Worrying about evil killer robots is like worrying about overpopulation on Mars.” In his view, we haven’t even landed on Mars (so to speak) with AI, meaning true artificial general intelligence is still far off, and we should focus on the tangible issues in front of us rather than distant sci-fi scenarios. Ng and others emphasize AI’s immediate benefits and advocate for pragmatic problem-solving (e.g., addressing algorithmic bias or job transitions) rather than fear-mongering. Similarly, experts like Oren Etzioni and Andrew Moore have suggested that AI is a tool that will amplify human capabilities and that with proper ethical guidelines, the nightmare scenarios can be averted. This camp views AI as a transformative opportunity – often dubbing it the “new electricity” for its potential to power countless innovations – and believes humanity can adapt just as we have with past revolutionary technologies.


Notably, even the doomsayers don’t call for abandoning AI research altogether; rather, they call for respectful caution and oversight. And the optimists don’t deny that AI can be disruptive; they just contend that we can manage the disruption. Over time, as AI systems become more commonplace, some of the extreme views may temper. We’ve already seen a shift from the peak of the hype cycle – a few years ago, popular discourse oscillated between “AI will solve everything!” and “AI will kill us all!” – to a more nuanced discussion today about regulation, ethical AI, and realistic use cases. In public opinion, surveys indicate people simultaneously value and fear AI: for example, a majority might agree AI will improve their lives and also express worry about its long-term impacts. This ambivalence is natural for a powerful, double-edged technology. As we collectively gain more experience with AI (in medicine, cars, smartphones, etc.), our understanding of its real risks and rewards sharpens, moving beyond speculative extremes.


Perspectives on AI (and new technology in general) are not fixed; they evolve over time. Typically, the pattern follows an S-curve of adoption: initially only a small group of enthusiasts embrace the tech, while the majority are skeptical or fearful. As the technology proves its worth or becomes more user-friendly, more people adopt it, and eventually it becomes mainstream and even mundane. During this evolution, several factors contribute to shifting attitudes from rejection to acceptance:
Nothing overcomes skepticism quite like seeing a technology tangibly improve one’s life or work. When people witness AI accurately detecting cancer early, or saving them time by automating tedious tasks, it builds a case for the technology. Perceived usefulness is a core factor in technology acceptance. In the classic Technology Acceptance Model (TAM), if users believe a new tech will help them accomplish goals better, their attitude toward it becomes more positive. Over time, the cumulative evidence of AI’s benefits (e.g. businesses increasing productivity, individuals getting new capabilities) can win over former doubters. Early success stories and pilot programs are therefore crucial in changing minds.


Familiarity breeds comfort. The more people interact with a new tech, the less intimidating it becomes. Familiarity reduces fear, making people less likely to outright reject an innovation. This has been observed with everything from automobiles to smartphones. Initially, self-driving cars, for instance, spooked people; but those who have ridden in one a few times often report feeling more at ease. In workplace settings, providing hands-on training and gradual exposure to AI tools can demystify them. Studies find that trust and training go hand-in-hand with acceptance: when users are educated about how an AI system works and get to try it in a low-stakes way, their trust increases and anxiety decreases. Essentially, the unknown becomes known. As one organizational psychologist put it, people need to “feel heard” and have their fears acknowledged during the introduction of new tech; once that happens and they gain personal experience, the emotional resistance often gives way to practical evaluation. It’s telling that younger generations, who grow up with AI around, tend to view it as normal – their familiarity started early, making them inherently more open to it.


A technology that is user-friendly and integrates smoothly into existing routines faces less pushback. If AI tools are too complex, difficult to use, or require huge behavior changes, people will resist longer. However, if developers focus on intuitive design (for example, a voice assistant that feels as natural as talking to a person), adoption speeds up. Again referencing TAM, perceived ease of use directly influences attitude. Early personal computers required command-line inputs and were confined to hobbyists; the invention of the graphical user interface greatly broadened acceptance. Similarly, as AI is embedded in everyday apps and devices in seamless ways, people may use it without a second thought (often not even realizing that “AI” is at work). This gradual, almost invisible integration can melt resistance – consider how many people were wary of “algorithms,” yet millions now happily let algorithms recommend music, navigate them through traffic, or filter spam email.


People take cues from others in their community. As more friends, colleagues, or competitors adopt a technology, the social pressure to also adopt increases. In the early phase, trusted experts or leaders play a key role in swaying opinion. If a well-respected figure in an industry endorses an AI tool (or alternatively, demonstrates its safe use), others are more likely to give it a chance. High-status individuals can thus help normalize new tech. Over time, a broader cultural shift can occur – the narrative changes from “this is scary and will ruin things” to “this is the way of the future, we have to adapt.” We’ve seen language reflect this shift: phrases like “Luddite” have become pejorative, and there is an expectation, especially in business, that one should not fall behind technologically. As AI becomes woven into everyday products and conversations, a new generation of digital culture arises that views AI as an opportunity to be leveraged, not something to fight against. In Japan, for example, decades of positive storytelling around robots have created a culture where robot helpers are welcomed in homes and hospitals. Elsewhere, as AI successes mount, media stories have begun to balance doom scenarios with pragmatic optimism, gradually influencing public sentiment toward cautious acceptance rather than outright fear.

And finally, a crucial factor in acceptance is how we address the legitimate risks of new technology. Public resistance often persists when people feel dangers are ignored. By acknowledging concerns (privacy, bias, safety) and taking visible steps to mitigate them (through ethical guidelines, laws, and transparent practices), leaders can build trust. For AI, this might include implementing data protection rules, bias audits for algorithms, bans on the most hazardous applications (like autonomous weapons), or safety certifications. When people see that there are “rules of the road” for a new tech, they feel more secure that it won’t be a wild menace. Successful integration of past tech – from seatbelts in cars to content ratings on television – often involved regulatory measures that addressed public worries, thereby smoothing the path to broader acceptance. In the case of AI, ongoing global discussions about AI ethics and governance may similarly reassure the public that this powerful tool is under oversight. If AI systems are explainable and users can understand or challenge their decisions, that also enhances trust and uptake.


Over time, these factors combine to shift the Overton window of acceptability. What was once seen as radical or frightening (e.g. doctors trusting an AI’s diagnosis) can become standard practice a decade later. Perspectives on AI are already evolving. A few years ago, many businesses were hesitant to even experiment with AI; now AI-powered solutions are commonplace in enterprise software. Public opinion too shows fluidity – for instance, a significant number of people who initially said they distrusted AI have begun using AI-powered services like virtual assistants, often because they found the convenience outweighed initial qualms. As one writer pointed out after reviewing centuries of techno-skepticism, “There will always be a future, and before long, it will just be the new normal.” The shock of the new fades as it becomes the normal of the now.
You must be as tired of reading as I am of writing by now, so let me conclude. Human resistance to new technology, including artificial intelligence, is rooted in deep psychological instincts and reinforced by millennia of cautionary tales. From an evolutionary lens, our wariness guarded us from danger; from a historical lens, it’s a recurring theme whenever disruptive innovations emerge. AI is triggering those same age-old fears – of losing control, of moral decline, of economic displacement – that greeted the printing press, mechanization, and other breakthroughs in their time. Yet history also teaches that initial fears often give way to acceptance once the value of a technology is realized and societies adapt their norms accordingly. Whether AI is seen as a threat or an opportunity today depends on who you ask and where: different cultures frame the narrative in unique ways, and different industries feel distinct impacts. But perspectives on AI are not static. They are continually shaped by experience, education, and engagement. As AI continues to evolve and integrate into our lives, it’s likely to become less of an enigma and more of a familiar tool – albeit one we must intentionally guide.


Crucially, acceptance of AI doesn’t mean blind embrace; it means moving from fear toward a balanced appraisal of both benefits and risks. The path to that acceptance is smoothed by involving people in the conversation (addressing fears, providing training, ensuring ethical use) and by highlighting positive outcomes. With time, what once seemed like a menacing innovation can become “just another technology” we coexist with – much as printed books, factory machines, and computers eventually did. As Juma reminds us, every reaction to new technology is valid; understanding those reactions is the first step to addressing them. By learning from psychology and history, we can better navigate the public’s response to AI, guiding it from reflexive resistance toward informed, thoughtful adoption. In the end, AI – like technologies before it – will be what we collectively make of it: a threat we fail to manage, or an opportunity we cautiously but confidently seize. The human story has always been about adapting to change, and our relationship with artificial intelligence will be no different.
So my personal take? Let’s embrace new technologies and let them help us advance our standard of living, increase our longevity here on this planet, and more importantly: let’s use technology to help tackle the challenges the Sustainable Development Goals address. God knows we need it.