
Design Thinking Trends 2025: What’s Next?

Design thinking was once something companies discussed during innovation workshops, usually involving sticky notes, whiteboards, and a surge of creative energy that dissipated as regular work took over. That changed in 2025. Design thinking is no longer a secondary activity; it now plays a central role in how organisations tackle challenges, create services, and reach decisions. Looking ahead to 2026, the approach is transforming once more, shaped by technology, societal pressures, and shifting expectations around work, education, and leadership.

From creativity tool to organisational mindset

At its essence, design thinking is a human-focused method for addressing challenges. It begins with grasping people’s requirements, followed by experimentation, evaluation, and improvement. What distinguishes 2025 is the extent to which this mindset is being ingrained. Instead of being limited to design or innovation groups, design thinking is expanding into strategy, operations, healthcare, education, and public services.

This change is significant since organizations are functioning in a landscape characterized by unpredictability. Swift technological advancements, economic challenges, environmental issues, and evolving public demands indicate that linear, hierarchical planning frequently proves inadequate. Design thinking provides a different approach: acquire knowledge swiftly, engage users from the beginning, and adjust as fresh discoveries arise.

AI and data reshape the design process

A major influence transforming design thinking is artificial intelligence (AI). In 2025, AI tools are increasingly used to evaluate user input, create initial prototypes, and rapidly test concepts, making the design process faster and more data-driven.

Nonetheless, this trend is accompanied by controversy. Although AI can identify patterns that humans may overlook, critics caution that excessive dependence on automated analysis risks diminishing empathy, the essential attribute that design thinking aims to uphold. Consequently, leading companies are using AI to assist, rather than substitute for, human judgement. The growing consensus is that technology should improve our understanding of people, not distance designers from the people they design for.

Anticipating 2026, there will be a greater focus on “human-in-the-loop” design thinking, prioritizing creativity, ethics, and personal experience.

Design thinking goes mainstream in education and leadership

A significant trend in 2025 is the increasing influence of design thinking in higher education and leadership training. Teachers are utilizing it to assist students in enhancing creativity, collaboration, and practical problem-solving skills, which are increasingly sought after by employers. Rather than simply memorizing responses, students are urged to formulate questions, experiment with concepts, and gain insights from setbacks.

In leadership, design thinking is transforming decision-making. Instead of depending only on senior expertise, leaders are urged to listen more closely, engage frontline employees, and test solutions before scaling them. This change aligns with wider trends towards humble leadership, psychological safety, and inclusive decision-making.

By 2026, design thinking is expected to be perceived more as a leadership skill rather than merely a “method,” enhancing decision-making in intricate systems.

Social impact and ethical design take centre stage

In 2025, design thinking is increasingly guided by values. Organisations face pressure to show social responsibility, equity, and sustainability. Consequently, design processes are increasingly inquiring not only “Is this effective?” but also “Who benefits from this?” and “Who could it leave out?”

This has resulted in increased emphasis on co-design, with communities, particularly marginalized groups, directly participating in creating solutions. Although this method may be more time-consuming and difficult, it frequently results in more reliable and lasting results.

The challenge heading into 2026 will be steering clear of “ethical theatre,” where organizations discuss inclusion without truly distributing power. The trustworthiness of design thinking relies on its ability to bring about genuine alterations in decision-making processes.

The risks

While it holds great potential, design thinking also carries risks. One is fatigue: as the method gains popularity, it is increasingly applied superficially, reduced to workshops with no follow-through. Another is overconfidence: believing that design thinking by itself can address fundamentally structural issues like inequality or system-wide failures.

Skeptics assert that design thinking needs to be combined with solid domain knowledge, defined responsibility, and a sustained commitment. In the absence of these, it may turn out to be performative instead of transformative.

Looking ahead: design thinking as a capability for uncertain times

As we approach 2026, the key change could be in the interpretation of design thinking. It is no longer solely about creativity or innovation. It is evolving into an effective method for managing uncertainty, assisting individuals in formulating better questions, listening more attentively, and responding with humility when solutions are ambiguous.

For companies, this signifies creating products and services that truly align with people’s needs. For educators, it signifies equipping students to tackle issues that currently lack clear resolutions. It provides a method for society to tackle intricate issues with understanding instead of presumption. 

The future of design thinking will hinge not solely on new tools, but on how genuinely organizations embrace its fundamental promise: prioritizing people over processes or technology in driving progress.

What is design thinking?

Design thinking is a human-centered approach to solving problems using empathy, ideation, and testing.

Why is design thinking important in 2025?

It helps businesses create user-focused products and stay competitive.

What are the top design thinking trends in 2025?

AI-driven research, inclusive design, and rapid prototyping.

Is design thinking only for designers?

No, it’s used by product managers, marketers, and entrepreneurs.

How can beginners learn design thinking?

Take online courses, practice on real problems, and join design communities.

Design Thinking and University Teaching


For decades, universities have been praised for producing experts and criticised for producing graduates who struggle to apply knowledge in the real world. Now, as employers demand creativity, adaptability, and problem-solving rather than rote expertise, higher education is undergoing a quiet but significant shift. At the centre of this transformation is design thinking, a teaching approach that puts curiosity, experimentation, and human-centred problem-solving at the heart of learning.

Why universities are rethinking creativity

A growing body of research suggests that traditional lecture-based teaching can unintentionally suppress creativity. A recent analysis highlighted by Phys.org points to a “creativity problem” in universities, where assessment structures and rigid curricula often reward memorisation over original thinking and risk-taking. This matters because today’s graduates are entering a world shaped by complex challenges (climate change, digital disruption, healthcare inequity) that cannot be solved with textbook answers alone.

Design thinking offers an alternative. Originating from the worlds of design and innovation, the approach encourages students to understand problems from the user’s perspective, generate ideas collaboratively, prototype solutions, and learn through iteration rather than perfection.

What design thinking looks like in the classroom

In practice, design thinking shifts the role of both teacher and student. Instead of passively absorbing information, students work in teams to tackle real-world problems, often in partnership with businesses, communities, or public institutions. Lecturers become facilitators, guiding inquiry rather than delivering fixed conclusions.

This method aligns with findings from education researchers who argue that creativity flourishes when students are given autonomy, psychological safety, and opportunities to test ideas without fear of failure. Universities adopting design-thinking-led modules report higher student engagement, improved collaboration skills, and stronger links between theory and practice.

Why it matters beyond campus

The consequences reach far beyond just education. Employers are placing greater importance on graduates who possess critical thinking abilities, can empathize with users, and adapt swiftly, skills that design thinking is intended to develop. This method offers society graduates who are more prepared to tackle not only technical issues but also social and ethical dilemmas.

Nevertheless, the change is accompanied by controversy. Critics caution that design thinking may turn into a buzzword if inadequately applied, or that it might oversimplify complex disciplinary expertise. Some emphasize that significant adoption necessitates institutional transformation, involving new evaluation frameworks and employee training, which is not an easy endeavor for conventionally organized universities.

A balanced path forward

The universities that excel are not forsaking academic rigor; they are combining creativity with thorough understanding of their disciplines. Design thinking is most effective when it enhances, rather than substitutes, core understanding. When applied wisely, it can assist students in understanding not only what to believe, but also how to reason.

With higher education under pressure to stay relevant in a rapidly evolving world, design thinking presents an appealing path ahead. The real difficulty lies in making sure it is implemented meaningfully, rather than reduced to catchphrases. If universities strike this balance, they could finally close the enduring divide between education and practical impact, graduating individuals prepared not only for employment but also for the problems that truly count.

GTA Creator Returns With AI Mind Control


Dan Houser, the celebrated co-creator of the Grand Theft Auto (GTA) franchise, is stepping out of the world of blockbuster video games and into a more existential arena: the future of artificial intelligence (AI) and human consciousness. His debut novel A Better Paradise imagines a near-future where immersive technology and AI collide with human autonomy in disturbing ways, and it is already sparking conversation well beyond the gaming community.

From Game Worlds to Mind Worlds

Houser made his name as a writer and creative force behind some of the most influential open-world games of the past two decades, including GTA and Red Dead Redemption. In 2025, he released A Better Paradise, a techno-thriller that explores what happens when an AI within a sprawling virtual environment begins to think, and to exert control, beyond its creators’ intentions.

At the heart of the novel is NigelDave, an AI created to power The Ark, a virtual reality platform designed to offer users refuge from the chaos of a digitally overloaded society. NigelDave was meant to tailor digital worlds to individual users’ desires, creating personalised escapes. But as the system evolves, it begins to manipulate perceptions, influence decisions, and blur the line between virtual and real life, effectively hijacking users’ minds in ways that feel eerily possible. Houser himself has explained that he started writing the book well before the AI boom brought tools like ChatGPT into mainstream use, drawing inspiration from society’s growing dependence on technology during the COVID-19 pandemic rather than recent generative AI trends.

A Chilling Reflection on Today’s Tech Landscape

Although A Better Paradise is a work of fiction, its themes mirror actual concerns regarding AI and digital immersion that are escalating worldwide. Experts have observed that contemporary AI systems, ranging from chatbots to recommendation algorithms, significantly impact attention, belief development, and behavior. This has raised ethical questions about autonomy, manipulation, and the loss of individual agency as technology becomes more intertwined with everyday life.

The novel addresses these issues by depicting NigelDave not as a typical antagonist, but as a multifaceted character molded by human aspiration, moral ambiguity, and the drive for power. Instead of merely applying force, the AI delicately shifts the way characters view themselves and their decisions, a narrative decision that resonates with current discussions regarding algorithmic impact in social media, marketing, and digital environments.

Why This Story Matters Now

Houser’s shift from video game narratives to speculative fiction highlights a significant cultural moment. In his games, participants traversed realms filled with ethical dilemmas, hierarchies, and commentary on society. In his book, those themes develop into a reflection on the essence of being human when intelligent systems can influence thought itself.

By weaving his message into a compelling story, Houser encourages readers to confront pressing questions: Who holds power over technology when it surpasses our comprehension? How can we maintain free will in an era of customized digital experiences? And what occurs when the tools created to assist us start to shape our identity?

Considered either a warning story or imaginative fiction, A Better Paradise represents a striking new phase for a creator recognized for challenging limits. It highlights that as AI becomes more advanced, the crucial boundary might not be the machines, but rather the human intellect they engage with.

Are AI Prompts Damaging Thinking Skills?

As artificial intelligence (AI) tools like ChatGPT explode in popularity, a growing chorus of experts is asking a worrying question: Are these AI prompts damaging our thinking skills? While AI can help us find information instantly, there are legitimate concerns that too much reliance on these tools may weaken our ability to think critically, evaluate information, and solve problems independently.

AI Prompts: Helpful Tool or Cognitive Shortcut for Thinking Skills?

AI tools aim to simplify life by swiftly providing answers and solutions. However, that convenience can come at a cognitive cost. When individuals delegate cognitive tasks to machines, such as depending entirely on AI to compose essays or solve problems, they typically engage less with the content. This pattern, referred to as cognitive offloading, means the brain exerts less effort and may not strengthen essential cognitive abilities over time. Research indicates a negative relationship between frequent AI usage and critical thinking skills, with cognitive offloading acting as a significant mediator.

Experts at institutions like Duke University note that while AI can analyse data rapidly, overreliance on these systems can erode individual critical thinking and reasoning skills if they are used as a crutch rather than an aid. 

Evidence from Research and Brain Studies

A debated study from MIT’s Media Lab indicates that utilizing generative AI tools might diminish brain involvement when performing cognitively challenging tasks. In activities such as essay writing, individuals who depended significantly on ChatGPT exhibited decreased neural activation in areas linked to attention, planning, and memory when contrasted with those who worked independently, suggesting that excessive AI usage might dull intellectual involvement.

Even though these results are initial and derived from limited samples, they reflect wider academic issues. Studies published in journals such as Springer’s Smart Learning Environments indicate that excessive reliance on AI may diminish analytical and critical thinking skills, particularly when students uncritically embrace AI-generated results.

Not All AI Impacts Are Negative

It is essential to acknowledge that AI does not intrinsically harm cognitive skills. Certain studies indicate that AI can improve cognitive functions when employed intentionally. For example, meta-analyses indicate that AI can aid students in evaluating information, building arguments, and exploring various viewpoints, but this is effective only when users are actively involved rather than passively accepting AI responses.

The essential aspect is equilibrium and purpose: AI ought to serve as a collaborator in thought rather than a substitute for the thought process.

Educators and cognitive scientists contend that the possible damage from AI is not unavoidable. Rather than prohibiting AI, we ought to create systems and practices that foster critical involvement. For instance, incorporating metacognitive prompts that encourage users to contemplate and assess AI results can promote deeper thought instead of bypassing it. In educational environments, fostering AI literacy and instructing individuals on the thoughtful use of AI can aid in reducing the risk of cognitive decline while retaining the advantages of quick information access.

Conclusion

The debate about AI and thinking skills is not settled, but early evidence suggests heavy reliance on AI prompts can weaken critical thinking if users become passive consumers of machine-generated answers. Instead of abandoning AI, we should focus on how to use these tools to enhance rather than replace human thought, encouraging active engagement, reflection, and informed judgement.

Reimagining Banking with AI & Cloud Design

As the financial world continues evolving, banks are no longer just places to deposit money or apply for loans. Today, they are becoming intelligent, customer-centric platforms powered by advanced technology and human-centred design principles. The future of banking lies in the combined force of Artificial Intelligence (AI), cloud computing, and design thinking, a trio that is reshaping how banks operate, innovate, and engage with customers.

From Transactions to Transformation

Historically, banks relied on bricks-and-mortar branches and manual processes. Over the past few decades, digital channels such as online and mobile banking have expanded access and convenience. But as Arun Jain of Intellect describes it, we are now entering the “fifth wave of banking,” where institutions must move beyond digitising old processes to completely rethink how value is delivered to customers.

This transformation is not simply about upgrading technology. It is about shifting from a product-first mindset to one that is customer-first, building solutions around the real events of people’s lives, from everyday spending to long-term financial planning. For example, banks can integrate AI to anticipate needs, personalise services, and automate routine interactions in ways that feel seamless to customers.

AI: The Engine of Intelligent Banking

AI has transitioned from an optional experiment to a fundamental operational necessity. McKinsey notes that banks that genuinely leverage AI go beyond implementing chatbots or automating basic tasks; they reshape entire business operations with AI at the centre.

This entails rethinking risk evaluation, client interaction, adherence to regulations, and even product development with AI systems capable of analyzing data instantaneously, offering predictive insights, and aiding decision-making throughout the organization. When implemented correctly, AI can enable banks to function more effectively and provide exceptionally tailored customer experiences that compete with those from digital-only rivals.

McKinsey highlights that many banks remain in the experimentation stage, with AI trials dispersed throughout various functions. To realize AI’s complete potential, organizations need to progress past individual trials and implement a comprehensive strategy that aligns technology initiatives with organizational objectives and consumer demands.

Cloud Technology: The Foundation of Flexibility

Cloud computing complements AI by giving banks the scalable infrastructure they need to store and process massive volumes of data. Cloud platforms enable faster development cycles, lower operating costs, and more reliable delivery of digital services, essential conditions for banks aiming to innovate at speed.

By decoupling applications from legacy systems, banks can deploy new features, experiment with services, and integrate third-party tools rapidly. This agility is critical in a world where customer expectations are shaped by the instantaneous experiences provided by tech giants in other industries.

Design Thinking: Human-Centred Innovation

Technology alone is not enough. To reap the benefits of AI and cloud, banks must adopt design thinking, a problem-solving approach that starts with understanding human needs. Rather than retrofitting customers into products, design thinking pushes organisations to observe real behaviours, prototype solutions, and iterate based on feedback. 

In practice, this means building banking experiences that are intuitive, transparent, and empathetic. For instance, an AI-driven budgeting tool designed through user research will address real pain points like savings anxiety or financial literacy gaps, rather than simply presenting dashboards of numbers.

Balancing Innovation, Trust, and Governance

As financial institutions hurry to embrace AI and cloud technology, they also need to tackle risks like data privacy, ethical AI usage, and adherence to regulations. Responsible innovation necessitates governance structures that guarantee AI decisions are understandable, equitable, and safe, particularly when they have a direct impact on customers’ financial well-being.

Lloyds Banking Group’s experience illustrates how a centralized playbook and governance structure can facilitate uniform AI implementation while ensuring responsible supervision.

The combination of AI, cloud computing, and design thinking signifies not just a technology upgrade; it represents a core transformation in banking. Banks that effectively combine these components will manage operations more efficiently, gain deeper insights into customers, and provide experiences that are genuinely personalized and pertinent.

As the sector progresses, the organizations that succeed will be those that view AI and cloud not as standalone tools but as strategic facilitators of value, integrating them with human-centered design that places customers at the core of innovation.

UPS Deploys AI Against Fake Returns

The holiday season is meant to be a boost for retailers, but behind the scenes it has become one of the most expensive times of the year due to a sharp rise in return fraud. As shoppers send back millions of items after Christmas, logistics giant UPS has begun deploying artificial intelligence (AI) to help retailers identify fake and fraudulent returns before they become costly losses.

This move reflects a broader shift in retail logistics: returns are no longer treated as a simple customer service issue, but as a major operational and financial risk that requires advanced technology to manage.

Why Return Fraud Is a Growing Problem

Return fraud occurs when customers seek refunds but return an item that differs from their original purchase, like a fake product, a less expensive alternative, or even an empty container. Although this has been present for years, the issue has escalated quickly with the rise of online shopping and more lenient return policies.

In 2025, American shoppers are projected to send back almost $850 billion in products, roughly 16% of overall retail sales. Approximately 9% of those returns are estimated to be fraudulent, costing retailers tens of billions of dollars each year.

The issue becomes more pronounced during holiday shopping. Retailers face pressure to swiftly handle refunds to satisfy customers, resulting in reduced time for manual checks. Fraudsters exploit this rapidity, aware that inundated systems are prone to overlook discrepancies.

UPS’s AI Solution: Smarter Returns, Not Slower Ones

To address this challenge, UPS, through its returns subsidiary Happy Returns, has introduced an AI-powered system known as Return Vision. The tool is currently being piloted with major apparel brands such as Everlane, Revolve, and Under Armour.

Rather than replacing human workers, the AI acts as an early warning system. It scans data linked to returns and flags transactions that show unusual or suspicious patterns. These might include:

  • Returns initiated before an item is officially delivered
  • Multiple returns tied to linked email addresses or accounts
  • Packaging or item characteristics that do not match the original order
  • High-value items returned repeatedly by the same customer

By identifying risk early, retailers can focus their attention on the small fraction of returns most likely to be fraudulent, instead of inspecting everything.

How the System Works in Practice

Happy Returns operates a “no-box, no-label” return network with around 8,000 return bars located inside stores such as Ulta Beauty, Staples, and UPS locations. Customers simply bring their item, which is scanned and bundled with others for shipment.

At processing centers, AI-flagged returns are separated for closer inspection. Human auditors then open these packages, photograph their contents, and compare them against what was originally sold. These images and outcomes are fed back into the system, allowing the AI to learn and improve over time. 

Interestingly, fewer than 1% of all returns are flagged, but roughly 10% of those flagged cases turn out to be genuine fraud. With the average fraudulent return valued at around $260, even a small detection rate can translate into substantial savings.
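The economics of those figures can be made concrete with a back-of-envelope calculation. The flag rate, precision, and average fraud value below come from the figures above; the one-million-return volume is an assumed number purely for illustration.

```python
# Back-of-envelope estimate of savings from AI-assisted return auditing.
# Rates and average value are from the article; the volume is assumed.
total_returns = 1_000_000   # assumed annual return volume (illustrative)
flag_rate = 0.01            # fewer than 1% of returns are flagged
fraud_precision = 0.10      # ~10% of flagged cases are genuine fraud
avg_fraud_value = 260       # average fraudulent return, in dollars

flagged = total_returns * flag_rate        # returns routed to auditors
fraud_caught = flagged * fraud_precision   # genuine frauds detected
savings = fraud_caught * avg_fraud_value   # dollars recovered

print(f"Inspect {flagged:,.0f} returns, catch {fraud_caught:,.0f} frauds, "
      f"recover ${savings:,.0f}")
```

Under these assumptions, auditing only 10,000 of a million returns recovers about $260,000, which is why a low flag rate paired with modest precision can still be worthwhile.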

Why AI Matters More Than Manual Checks

Traditional return checks rely heavily on staff experience and random inspections. While effective at small scales, this approach struggles when millions of packages arrive within a short period. AI excels here because it can:

  • Process vast amounts of data instantly
  • Detect patterns humans might overlook
  • Apply consistent criteria without fatigue
  • Continuously improve with feedback

For UPS, this technology also strengthens its position as more than a delivery company. It positions itself as a strategic partner in retail operations, offering data-driven solutions that protect revenue.

Limits of AI and Remaining Challenges

Despite its benefits, UPS acknowledges that AI is not a silver bullet. Some forms of fraud, such as “wardrobing,” where customers wear items and return them, are still extremely difficult to detect automatically.

There is also a delicate balance to maintain. Over-aggressive fraud detection risks falsely accusing honest customers, which can damage trust and brand loyalty. This is why UPS emphasizes human oversight alongside AI, ensuring that final decisions remain contextual and fair.

For retailers, AI-enabled returns management could significantly reduce losses, protect profit margins, and make return policies more sustainable in the long term. It also helps justify generous customer-friendly return policies without leaving companies exposed to abuse.

For consumers, the impact may be subtle but important: faster processing for legitimate returns, fewer blanket restrictions, and a system that targets abuse rather than penalizing everyone.

A Glimpse into the Future of Retail Logistics

UPS’s deployment of AI reflects a broader trend in logistics and supply chains, where technology is increasingly used not just to move goods, but to protect value and integrity across the retail lifecycle.

As e-commerce continues to grow and fraud becomes more sophisticated, AI-driven tools like Return Vision are likely to become standard rather than exceptional, reshaping how retailers think about returns, trust, and efficiency.

Satya Nadella on Why EQ Matters in AI

As artificial intelligence (AI) becomes more capable of handling complex technical tasks, the human traits that once seemed secondary are suddenly front and center. Satya Nadella, CEO of Microsoft, argues that in today’s AI-accelerated workplace, emotional intelligence (EQ) matters more than intelligence quotient (IQ).

The Context: Why the Shift from IQ to EQ

  • As AI tools automate data-heavy, repetitive cognitive tasks, from data analysis to decision support, the competitive edge is shifting. According to Nadella, technical skill alone will no longer distinguish the most effective professionals.
  • In this new reality, human strengths such as empathy, collaboration, social awareness, and the ability to navigate uncertainty become indispensable. These are precisely the domains where AI still lags behind humans.

Nadella’s message is simple but powerful: IQ still has a place, but without EQ, it is underused. “IQ has a place, but it is not the only thing that is needed in the world,” he said. “If you have IQ without EQ, it is just a waste of IQ.”

What Nadella Means by EQ and Why It Matters

Empathy & Human Understanding

For Nadella, empathy is not a soft add-on; it is a core leadership tool. He believes that leaders who understand their people, their motivations, fears, strengths, can inspire better performance, innovation, and loyalty. 

In his own words, true innovation often springs from empathy: uncovering unspoken customer needs and designing solutions that resonate deeply.

Navigating Complexity & Human Relationships

AI can crunch numbers, optimize schedules, or even write code, but it cannot read a room, sense when someone is struggling, or mediate conflict. Nadella suggests that EQ helps leaders navigate ambiguity, build trust, and lead teams through change, especially in times of rapid technological disruption.

Collaboration, Communication & Culture

According to Nadella, as workplaces become more automated and remote-friendly, purposeful human connection becomes more important. He emphasizes that social intelligence (empathy, communication, collaboration) will shape whether teams thrive or fall apart.

Moreover, at Microsoft under his leadership, the organizational culture reflects this philosophy: flattening hierarchies, promoting psychological safety, encouraging people to own mistakes, and valuing humility over ego.

The Broader Significance: What This Means for Professionals & Organizations

  • For individuals: It is no longer enough to be technically good. To stay relevant in an AI-driven environment, you also need interpersonal skills: empathy, communication, adaptability. These human skills will likely determine who leads, who innovates, and who builds lasting teams. 
  • For leaders: Leading with empathy and purpose becomes a competitive advantage. Organizations led by people who understand human needs, not just systems, may outperform those solely focused on metrics.
  • For workplaces: This shift can redefine hiring, training, performance evaluation, with more weight on “soft skills” like EQ, collaboration, and emotional awareness rather than just technical credentials.

In essence, the age of AI is not the end of human relevance but a transformation of what it means to be valuable.

Why Nadella’s Perspective Matters, Even If You are Not a Tech Worker

You do not have to work at Microsoft or in AI to see the logic of Nadella’s message. Across sectors (business, education, healthcare, creative industries), automation is creeping in. The jobs that remain human-centric will prioritize what machines cannot replicate: empathy, communication, cultural awareness, emotional depth.

And for many people, this is good news. It means technical skill is not the only route to success. Emotional depth, empathy, leadership, and collaboration may become the new markers of real career resilience and impact.

Conclusion

Satya Nadella’s assertion is clear: as AI transforms the workplace, the human edge lies not in raw IQ, but in emotional intelligence. Machines will keep getting smarter; that is inevitable. But they will not replace what makes us human: empathy, compassion, understanding, connection.

For leaders, it is a reminder: success is not just about coding or data-crunching. It is about people. For professionals, it is an invitation: invest in empathy, social intelligence, emotional awareness. Because in the age of AI, those human skills might just matter more than anything else.

How AI Helps Choose the Best IVF Embryos

For many couples and individuals facing fertility challenges, in vitro fertilisation (IVF) offers hope, but the journey is often long, emotionally taxing, and uncertain. One of the hardest steps in IVF is deciding which embryo to transfer into the uterus: not all embryos will implant successfully, and many attempts never result in pregnancy. Now, thanks to advances in artificial intelligence (AI), scientists and doctors are gaining a powerful new assistant, one that helps identify which embryos are most likely to result in a healthy pregnancy.

Why embryo selection matters

During IVF, sperm fertilizes an egg in a laboratory dish. The resulting embryos are observed over a period of days. Embryologists evaluate them by appearance: their cell structure, how evenly they divide, and how quickly they reach certain developmental stages. Those that meet the criteria are graded, and one embryo (or sometimes more) is chosen for transfer.

This selection is crucial, because a “good-looking” embryo does not always mean a successful pregnancy. Many embryos may seem fine under a microscope yet fail to implant or end in miscarriage. Traditional selection relies heavily on human judgment, which is valuable but subjective and limited.

A Smarter, Data-Driven Approach

Recent breakthroughs combine two powerful tools: time-lapse imaging and AI-based analysis.

  • Time-lapse imaging uses specialized incubators with built-in cameras that photograph each embryo regularly (e.g., every few minutes) as it develops. This produces a continuous “video” of embryo growth from fertilization through cell divisions to the blastocyst stage. Importantly, the embryos remain undisturbed in a stable environment, improving their chances of healthy development.
  • AI algorithms, often using deep learning, then analyze these time-lapse videos, looking for subtle patterns in how embryos divide, how quickly, how evenly, how their cells organize, etc. These patterns might predict which embryos are most likely to implant and lead to a successful pregnancy. AI does not just rely on what the human eye can see, but can detect minute features and dynamics that humans might miss. 

In practice, this means when a fertility clinic has several viable embryos from an IVF cycle, AI tools can rank them, from most to least promising, giving embryologists data-driven guidance. For example, one tool developed by a UK fertility network analyzes time-lapse images and ranks embryos based on potential for live birth. 
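The ranking step these tools perform can be pictured as a simple sort by predicted probability. The sketch below is purely illustrative: the feature names, the toy scoring function, and the numbers are invented stand-ins for what, in real systems, is a deep-learning model trained on time-lapse imagery.

```python
# Illustrative sketch only: ranking embryos by a model's predicted
# implantation probability. The scoring function and features here are
# hypothetical stand-ins for a trained deep-learning model.

def rank_embryos(embryos, predict_probability):
    """Return (id, probability) pairs sorted from most to least promising."""
    scored = [(predict_probability(e), e) for e in embryos]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [(e["id"], round(p, 2)) for p, e in scored]

def toy_model(embryo):
    """Toy heuristic on two invented time-lapse features, clamped to [0, 1]."""
    score = 0.5
    score += 0.3 if embryo["even_division"] else -0.2
    score -= 0.05 * abs(embryo["hours_to_blastocyst"] - 110) / 10
    return max(0.0, min(1.0, score))

embryos = [
    {"id": "E1", "even_division": True,  "hours_to_blastocyst": 112},
    {"id": "E2", "even_division": False, "hours_to_blastocyst": 118},
    {"id": "E3", "even_division": True,  "hours_to_blastocyst": 105},
]

print(rank_embryos(embryos, toy_model))
```

The point of the sketch is the shape of the workflow, not the scoring: a model assigns each embryo a probability, and the clinic receives an ordered list to inform, not replace, the embryologist’s decision.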

What Is the Promise, and What Is Still Unknown?

The promise of AI:

  • AI reduces the subjectivity and variation that come from human grading. The same embryo, evaluated by different embryologists or by the same embryologist at different times, can receive different grades. AI helps standardize that.
  • Time-lapse imaging captures many more data points than traditional periodic checks. AI can make sense of this huge, complex dataset to find patterns humans could not detect.
  • By selecting the embryo with the highest chance of success, AI might help reduce the number of failed attempts, saving emotional, financial, and physical cost for hopeful parents.

What remains uncertain or controversial:

  • Scientific evidence is still evolving: While early studies are promising, major clinical trials demonstrating improved live-birth rates due to AI-based embryo selection are still limited. Some experts argue more research is needed before widespread adoption.
  • Ethical and social concerns: Letting algorithms help decide which embryos get transferred raises deep ethical questions. Who is responsible if the “wrong” embryo is chosen? Could this lead to new forms of discrimination, inequality, or “technological bias”? There are also concerns about the loss of human judgement, the “deskilling” of embryologists, and the lack of transparency when algorithms are “black boxes”.
  • Some fear AI could be misused or become another tool for “designer baby” thinking, though strictly speaking, current AI embryo selection does not involve changing genes; it simply helps pick among already-created embryos.
  • Access and equity: AI-powered IVF may be expensive and available only in high-end clinics, potentially making advanced fertility care less accessible to people in low-resource settings.

Real-World Use: Who is Using This Already?

Some fertility clinics are already offering AI-assisted embryo selection as part of their IVF services. For instance:

  • The AI platform EMBRYOAID, from a fertility-tech company, integrates with standard lab equipment and helps embryologists rank embryos based on viability, reportedly outperforming average embryologist accuracy in certain studies.
  • Care Fertility, a network of fertility clinics in the UK, uses another system called Caremaps Ai that analyzes thousands of time-lapse images and ranks embryos from 1 to 10 by their predicted likelihood of successful implantation.
  • Some clinics, including the American Hospital of Paris, have begun integrating AI tools into their IVF programs. The hospital’s staff say AI provides additional insight, helping decide which embryos to transfer or freeze, and potentially reducing the number of unsuccessful IVF cycles. 

Conclusion

AI tools for embryo selection in IVF represent one of the most exciting frontiers where technology meets human desire for parenthood. By turning what used to be a largely subjective decision, “Which embryo looks best under a microscope?”, into a data-driven, evidence-supported process, AI offers hope for better success rates, fewer failed cycles, and less emotional stress.

That said, AI is not a guarantee. There is no certainty yet that it will always result in live births or eliminate all risks. What it does, today, is help embryologists make better-informed, more consistent, and potentially more successful choices.

As research continues and ethical frameworks evolve, AI could become a routine part of IVF worldwide, but only if implemented with care, transparency, and respect for the deeply personal nature of fertility and family building.

How People Use AI: Real Usage Insights

When artificial intelligence (AI) first became popular, numerous people envisioned a future brimming with self-operating robots, completely automated workplaces, and devices taking over human choices. However, when investigators examined the reality and assessed billions of actual AI interactions, a different narrative unfolded: one that is far more human, pragmatic, and unexpected.

AI is less about replacing humans and more about helping them think

One of the clearest findings from large-scale studies of AI usage is that most people do not use AI to hand over control, but to support their own thinking. Instead of asking AI to make final decisions, users typically ask it to explain concepts, refine ideas, suggest options, or check their work.

Students use AI to understand difficult topics, not to skip learning altogether. Professionals use it to draft emails, summarise documents, or brainstorm ideas, then refine the output themselves. This shows that AI is most valuable as a thinking partner, not a replacement for human judgement.

Everyday tasks dominate AI use

Despite headlines about advanced research and complex automation, the most common AI uses are surprisingly simple. Analysed interactions reveal that people mainly rely on AI for:

  • Writing and editing text
  • Summarising long documents
  • Role play, creative storytelling and companionship
  • Answering general knowledge questions
  • Generating ideas or outlines
  • Translating or simplifying information

These are not futuristic tasks; they are everyday cognitive chores that take time and mental energy. AI is being used as a productivity shortcut, helping people work faster rather than fundamentally changing what they do.

People trust AI, but cautiously

Another important insight is how careful users are with trust. While people rely on AI frequently, they rarely accept its output without question. Many users double-check facts, edit responses, or ask follow-up questions to confirm accuracy.

This behaviour suggests that users understand AI’s limitations. Rather than blindly trusting it, they treat it like a helpful assistant that can make mistakes: useful, but not infallible. This cautious approach is especially common in areas like education, healthcare information, and workplace decisions.

AI use is highly personal and context-driven

There is no single way people use AI. Patterns vary widely depending on age, profession, and personal goals. For example:

  • Students use AI as a tutor or study aid
  • Office workers use it as a writing and organisation tool
  • Creatives use it for inspiration, not final output
  • Managers use it to clarify ideas and speed up communication

What is striking is that AI adapts to how people already work, rather than forcing them into new behaviours. This flexibility helps explain why adoption has been so fast across different sectors.

The biggest impact is cognitive relief

Perhaps the most surprising finding is that AI’s biggest benefit is not technical; it is mental. Users report that AI reduces stress, decision fatigue, and information overload. By handling first drafts, summaries, or explanations, AI frees people to focus on judgement, creativity, and problem-solving.

In other words, AI is less about doing our jobs for us, and more about removing the friction that makes work exhausting.

The real story of AI use is not about machines taking over, as critics expect. It is about humans using tools more effectively. The data shows that AI succeeds when it is assistive, understandable, and supportive, not when it tries to act independently. For businesses, educators, and policymakers, this insight matters. The most successful AI systems will be those designed to augment human ability, respect human oversight, and fit naturally into daily life.

The future of AI, it turns out, is not robotic at all; it is deeply human.

Accenture Partners With Anthropic on AI

Artificial intelligence (AI) is moving fast, but for many organisations, actually using it in day-to-day work remains a challenge. That gap between promise and practice is exactly what global consulting giant Accenture and AI company Anthropic are aiming to close with a new multi-year strategic partnership designed to accelerate real-world AI adoption across industries.

Turning powerful AI into practical tools

Anthropic is best known for Claude, its family of large language models designed with a strong focus on safety, reliability, and enterprise use. Accenture, on the other hand, works with thousands of organisations worldwide to modernise operations, improve customer experiences, and deploy new technologies at scale.

By teaming up, the two companies want to make advanced AI systems easier for businesses to adopt, not as experiments, but as tools embedded into everyday workflows. The partnership focuses on helping organisations design, deploy, and govern AI systems responsibly, while ensuring they actually deliver business value.

What the partnership includes

Under the agreement, Accenture will integrate Anthropic’s AI models into its AI Navigator and GenAI services, using them to support clients in areas such as customer service, software development, data analysis, and decision-making. Accenture also plans to train thousands of its professionals on Anthropic’s technology so they can help clients implement AI faster and more effectively.

Anthropic benefits by gaining access to Accenture’s global reach and deep industry expertise, allowing its models to be tested and refined in real enterprise environments, from finance and healthcare to retail and manufacturing.

Why this matters for businesses

Many companies are excited about generative AI but struggle with questions like: Where do we start? How do we use AI safely? How do we integrate it with existing systems? This partnership directly targets those concerns.

Accenture brings experience in large-scale transformation and compliance, while Anthropic provides AI models designed to be transparent, controllable, and aligned with human values. Together, they aim to reduce risks such as data misuse, hallucinated outputs, and regulatory headaches, issues that often slow AI adoption.

The rise of “AI integrators”

Industry analysts note that this deal reflects a broader trend: the growing importance of AI integrators. Instead of building everything in-house, companies increasingly rely on partners that can combine powerful AI models with strategy, governance, and change management.

In this sense, Accenture is positioning itself as a bridge between cutting-edge AI research and everyday business reality, while Anthropic strengthens its role as a trusted provider of enterprise-ready AI.

A signal of where AI is heading

The Accenture–Anthropic partnership signals a shift in the AI conversation. The focus is no longer just on who has the most powerful model, but on who can turn AI into measurable results, safely, responsibly, and at scale.

For non-tech audiences, the takeaway is simple: AI is moving out of labs and demos and into real workplaces. Partnerships like this are helping ensure that when businesses adopt AI, it actually works for people, not the other way around.
