
Top Web3 UX Challenges Slowing Adoption: Real User Struggles with Blockchain Usability


Web 3.0, commonly referred to as “Web3,” is the buzzword behind blockchain, cryptocurrency, non-fungible tokens (NFTs), and the so-called “next internet.” It promises a lot: 

  • more control for users, 
  • fewer middlemen, and 
  • ownership of digital assets. 

But for most people, trying to use a Web 3.0 application feels more like trying to crack a secret code than experiencing a revolution.

The vision is bold—but the experience? Often clunky, confusing, and downright frustrating. So, why does Web 3.0 still feel difficult, even for people who want to embrace it? 

Join us as we explore the roadblocks users are facing.

Wallets: The First Frustration

In Web 3.0, everything starts with a wallet—your digital ID and your vault. But setting one up is not as simple as signing into Google. Instead, users are asked to download a browser extension or an app like MetaMask, create a seed phrase (a long string of random words), and store it somewhere safe. No password reset option here. If you lose your seed phrase, you lose everything.
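To make the stakes concrete, here is a minimal sketch of how a seed phrase comes to be, using a toy word list purely for illustration (real wallets such as MetaMask follow the BIP-39 standard and draw 12 or 24 words from a fixed list of 2,048). The phrase is pure randomness generated on your device, which is exactly why no one can reset it for you:

```python
import secrets

# Tiny sample word list for illustration only; real wallets draw from
# the standard BIP-39 list of 2,048 English words.
WORDS = [
    "apple", "brave", "cloud", "delta", "ember", "frost",
    "grape", "honey", "ivory", "jolly", "koala", "lemon",
]

def make_seed_phrase(n_words=12):
    """Pick n_words at random; there is no server copy and no reset button."""
    return [secrets.choice(WORDS) for _ in range(n_words)]

phrase = make_seed_phrase()
print(" ".join(phrase))  # a different phrase every run
```

With the real 2,048-word list, a 12-word phrase has 2,048¹² possible combinations—far too many to guess—which is also why losing your only copy means losing the wallet.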

A UX researcher shared this in a 2023 Reddit thread:

“We tested a decentralised app (dApp) with first-time users. Most gave up at the wallet step. One said, ‘I feel like I am opening a bank account in another language.’”

Wallets are also fragmented. You may need different wallets for different blockchains (Ethereum, Solana, etc.), and some do not work on mobile. It is no wonder people feel overwhelmed before they even begin.

Gas Fees: Unexpected Charges That Drive Users Away

Once you get a wallet and want to do something—buy an NFT, send cryptocurrency, or vote in a decentralized autonomous organization (DAO)—you will often hit another wall: gas fees. These are transaction fees users pay to use the network. On Ethereum, they can jump from a few cents to $100+ depending on network activity.

In Web 2.0, users expect clear, predictable prices. In Web 3.0, the cost is not only confusing, it is constantly changing—and that breaks user trust.
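To see why the cost swings so wildly, the fee formula itself is simple: a transaction costs the gas it consumes times the current gas price, and it is the gas price that spikes with congestion. A back-of-envelope sketch (21,000 gas is Ethereum's fixed cost for a simple transfer; the gas prices and ETH/USD rate below are illustrative assumptions):

```python
# Back-of-envelope Ethereum fee estimate.
# fee (ETH) = gas_used * gas_price, with gas price quoted in gwei
# (1 gwei = 1e-9 ETH). 21,000 gas is the fixed cost of a simple transfer;
# the gas prices and the ETH/USD rate are illustrative assumptions.

GAS_SIMPLE_TRANSFER = 21_000

def fee_usd(gas_used, gas_price_gwei, eth_usd):
    fee_eth = gas_used * gas_price_gwei * 1e-9
    return fee_eth * eth_usd

# Quiet network: 10 gwei. Busy network: 200 gwei.
quiet = fee_usd(GAS_SIMPLE_TRANSFER, 10, 2_000)
busy = fee_usd(GAS_SIMPLE_TRANSFER, 200, 2_000)
print(f"quiet: ${quiet:.2f}, busy: ${busy:.2f}")  # quiet: $0.42, busy: $8.40
```

Same transaction, twenty times the price, depending on nothing but when you click the button—which is precisely the unpredictability that breaks user trust.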

Security Fears and No Customer Support

Web 3.0 gives users control, but with that comes risk. There is no “Forgot Password” button, no customer service line. Lose your wallet credentials? You are locked out forever. Fall for a scam link? Your assets are gone.

One user shared their experience on Wired:

“I clicked a link to mint a free NFT. It emptied my wallet. I did not even know what happened until hours later.”

Web 3.0 platforms still lack user-friendly protections like fraud warnings or confirmation prompts. To newcomers, it feels like walking a tightrope without a net.

Too Much Jargon, Not Enough Guidance

Try explaining “staking,” “bridging,” or “yield farming” to someone new, and you will see their eyes glaze over. Many Web 3.0 platforms assume a level of knowledge most users simply do not have.

A usability study published by Arounda Agency revealed that:

“Users often feel like they are using a developer tool, not a consumer product.”

There are rarely tooltips, walkthroughs, or simple instructions.

Lack of User-Focused Design

Web 3.0 tools are often made by developers for developers. The design feels like an afterthought. Things like buttons, messages, and layouts can be unclear. In traditional tech, these issues would be caught and fixed through user testing. But in Web 3.0, the race to launch sometimes overrides usability.

What Needs to Change?

  1. Simplify Onboarding: New users should be guided like beginners, not expected to know cryptocurrency lingo or security practices from day one.
  2. Clear Fees: Show gas fees upfront and in simple terms. Offer suggestions like using cheaper times of day.
  3. Security Help: Include fraud alerts, help centers, and step-by-step recovery tools.
  4. Better Education: Glossaries, walkthroughs, and “learn as you go” design can make a huge difference.

  5. User Testing: Build products that are actually tested with users—not just cryptocurrency enthusiasts.

Final Thoughts

Web 3.0 has powerful potential. But right now, it often feels like a technology demo—built for early adopters and engineers. For it to live up to its promise, the user experience must become just as revolutionary as the technology behind it.

Until then, many users will continue to peek into the Web 3.0 world, only to walk away saying, “I do not get it.”

The Rise of Deepfakes: How AI-Generated Media Threatens Online Trust in 2025

Imagine watching a video of a celebrity or politician saying something shocking—only to find out they never said it. Or hearing a familiar voice on the phone asking for help with money, only to realize it was fake. That is the world we are stepping into, thanks to a new type of technology called deepfakes.

So, What Are Deepfakes?

Deepfakes are videos, images, or even audio clips that are entirely fabricated yet appear and sound authentic. They are created with advanced computer software and artificial intelligence (AI). Using only a handful of pictures or sound recordings, an individual can fabricate a phony video of you talking or engaging in activities you never actually performed.

Initially, this may seem entertaining—similar to those apps that show how you would look as a cartoon or at an older age. However, deepfakes extend far beyond that, and not always in a good way.

Real Problems Caused by Deepfakes

Targeting People Personally

One of the worst things about deepfakes is how they are used to hurt individuals. Many women have found fake videos online that make it look like they are in adult content—when they never were. These videos are often shared without their permission, causing embarrassment, fear, and emotional pain.

Scamming and Tricking People

Scammers have started using deepfakes to steal money. They might fake the voice of your boss or a family member and ask you to send money urgently. Some workers have even been tricked into sending thousands of dollars because they thought a fake video call or email was real.

Spreading Lies in Politics

Fake videos have also been used to try to influence elections or damage reputations. A deepfake might show a politician saying something offensive or illegal, just to sway voters—even if it is completely false. This makes it hard to know what is real during important events like elections.

What is Being Done About It?

Technology Solutions to Detect Forgeries

Fortunately, researchers and engineers are also developing tools that can detect deepfakes. These programs search for small clues—such as unusual blinking patterns or odd lighting—to determine whether a video is genuine or fabricated. However, because deepfakes keep improving, it is a constant race between the people creating them and the people trying to catch them.

Fresh Regulations and Guidelines

Some governments are beginning to pass laws that punish people who use deepfakes to harm others—whether by creating fake videos to harass someone, spread falsehoods, or deceive people for financial gain. In many places, however, the law is still lagging behind the technology.

The best defense right now? Awareness. If more people understand what deepfakes are and how to spot them, fewer people will fall for them. This means checking facts, not trusting everything you see on social media, and being cautious before you share videos or click suspicious links.

So… Can we still trust what we see?

It is getting harder, but not impossible. We just have to be smarter and more careful. In the past, we believed the phrase “seeing is believing.” But now, with deepfakes becoming more common, we need to ask more questions: Who posted this? Where did it come from? Could it be fake?

As technology grows more powerful, so does our need to think critically. Whether we are watching a video online or getting a strange phone call, taking a moment to pause and verify might save us from falling into a digital trap.

Meet Vulcan: Amazon’s First Robot That Can Actually Feel Things

Amazon has unveiled Vulcan, its first warehouse robot equipped with a sense of touch, marking a significant advancement in robotics and warehouse automation.

A New Era of Robotic Dexterity

Most robots in warehouses use cameras or sensors to see where objects are and move them. But they are not very gentle—grab a box too hard, and it might get squished. Pick up something delicate, and it could break.

Vulcan changes that. It has special sensors in its arms and hands that allow it to feel how much pressure it is using—just like you do when picking up an egg versus a book. This means Vulcan can handle all sorts of items, from soft packages to oddly shaped products, without damaging them.

Amazon says Vulcan is already able to handle about 75% of all the different things in its warehouses.

Enhancing Human-Robot Collaboration

Vulcan is designed to assist human workers by handling physically strenuous tasks. For example, it can fetch items from both high and low storage units, reducing the need for workers to climb ladders or bend down and cutting physical strain. This partnership lets human employees concentrate on duties that demand more judgment and oversight.

Deployment and Future Plans

Right now, Vulcan is being tested in a couple of Amazon warehouses—one in Spokane, Washington, and another in Hamburg, Germany. If things go well, Amazon plans to bring it to more locations across the U.S. and Europe.

Even though robots like Vulcan are becoming more common, Amazon says it still needs people for things like setting up the machines and keeping them running smoothly.

Implications for Warehouse Operations

This new robot is a big step forward. By giving robots a sense of touch, Amazon is moving closer to creating machines that can safely and reliably handle just about anything in a warehouse. It is not about replacing workers—it is about making the work safer and smarter.

So, the next time you get a package from Amazon, there is a chance it was gently handled by a robot with a pretty good sense of touch.

7 AI Chatbots Ranked by How Much Data They Collect from You

AI chatbots are everywhere these days—helping you write emails, answer questions, or even plan your day. But have you ever stopped to wonder how much personal information they have access to while they help you?

This article sheds more light on that. It ranks some of the most popular AI chatbots by how much data they collect from users—and the differences are pretty eye-opening.

Top 7 AI Chatbots by Data Collection

The study evaluated various AI chatbots based on the number of personal data points they collect. Here is how they rank:

Gemini (Google) – Collects 22 data points

Google’s Gemini collects the most personal information out of all the chatbots on the list. It grabs your name, email, phone number, where you are, what you type into it, files you upload, and even keeps track of what you browse online and what you buy. It uses all this info to give you a personalized experience—but that also raises big questions about your privacy.

Claude (Anthropic) – Collects 13 data points

Claude collects a good amount of personal data, including your name, location, and what you say to the bot. However, it does not go as far as Gemini—it does not seem to track your shopping or browsing history as closely, which makes it a bit more privacy-friendly.

Copilot (Microsoft) – Collects 12 data points

Microsoft’s Copilot gathers information about your content, interactions, devices, and usage habits. This may include your tasks, your questions, and app-related details used to improve its services. Collecting device details and browsing habits adds an extra layer of personalization, though it remains mild compared with platforms such as Gemini.

DeepSeek – Collects 11 data points

DeepSeek is broadly similar to Copilot but gathers fewer data points overall. It collects essentials such as user content, location, and device details, yet appears less intrusive when it comes to monitoring browsing and purchasing habits.

ChatGPT (OpenAI) – Collects 10 data points

ChatGPT mainly focuses on your conversations—what you ask and how you use the chatbot. It also collects some info about your device. OpenAI gives you some control over what is stored, and while they do use conversations to improve the AI, they are more transparent than some others.

Perplexity – Collects 10 data points

Perplexity collects similar types of data as ChatGPT—what you say, what device you use, and how often you interact with it. It does not seem to follow your shopping habits or track what websites you visit, which keeps it relatively moderate on the privacy scale.

Grok (xAI) – Collects 7 data points

Grok collects the least amount of data. It mostly grabs basic stuff like your name and how you use the bot—but it does not keep tabs on what you are shopping for or browsing online. If you care a lot about privacy, Grok is the least invasive option on this list.

What Kind of Data Is Being Collected?

The data points collected by these chatbots encompass a wide range of personal information:

  • Personal Identifiers: Name, email address, phone number
  • Location Data: Geographical information based on IP address or GPS
  • User Content: Conversations, queries, and shared documents
  • Usage Patterns: Interaction frequency, time spent, and feature usage
  • Device Information: Device type, operating system, and browser details

Notably, some chatbots, like Gemini, also track user purchases and browsing history, raising further privacy concerns. 

Why This Matters

The more data a chatbot collects, the better it can personalize your experience. But that also means more of your private info is being stored somewhere—and possibly used in ways you did not expect.

That could mean:

  • More targeted ads
  • Your data being used to train AI models
  • Greater risk if there is a data breach

So if you care about privacy, you might want to pick a chatbot that keeps data collection to a minimum—or at least one that gives you control over what is shared.

What Can You Do?

If you are not comfortable handing over a bunch of data:

  • Choose more private chatbots, like Grok, that collect less
  • Check the privacy settings of the chatbot you use—many let you limit what they track
  • Read the fine print (yes, it is boring, but it helps) to know how your data is used

  • Think before you share—avoid typing sensitive info into a chatbot unless you know how it is handled

Bottom Line

AI chatbots can be super helpful—but they do not come free. You are often paying with your personal data. By understanding how much information each chatbot collects, you can make smarter choices about which ones you trust.

After all, it is your data. You should decide who gets to see it.

Top 5 Companies Dominating the Web3 Space in 2025


If you have ever heard someone mention “Web3” and thought, “That sounds techy” — you are not alone. Web3 is the next version of the internet, where power shifts away from big tech companies and into the hands of everyday users through blockchain, decentralization, and digital ownership.

Think of it like moving from renting a house (Web2, where Facebook and Google own your data) to owning your own place (Web3, where you control your identity and assets). While popular names like Bitcoin and Ethereum grab headlines, several companies are quietly building the digital roads, bridges, and neighborhoods of this new online world.

Here are five Web3 companies making big moves behind the scenes — without always making big noise.

Worldcoin (by Tools for Humanity)

Worldcoin wants to make sure people — not bots — are the ones using online services, especially in an AI-centered future. Their device, the “Orb,” scans your eye (iris) to create a digital ID just for you. That ID helps prove you are a real human, not a fake online profile.
This could be used in the future to fairly give out Universal Basic Income (UBI) — a kind of free token for everyone — especially in a world where AI might take many jobs. They have also launched a crypto wallet and are teaming up with Visa to bring this tech to payments.

Chainlink Labs

Chainlink helps blockchains connect to real-world data — like prices, sports scores, or weather — through a system called “oracles.” Think of oracles like bridges that bring external facts into smart contracts. This is super important for DeFi (Decentralized Finance), where a smart contract might need to know the exact price of Bitcoin before making a trade.
They have helped support over $20 trillion in blockchain transactions, which is a huge chunk of the Web3 economy.

GoMining

GoMining makes it easy for regular folks to get into Bitcoin mining. Normally, mining requires expensive machines and lots of electricity. Instead, GoMining offers NFTs linked to actual mining hardware. When you own one, you earn small amounts of Bitcoin daily — kind of like owning a tiny part of a digital gold mine. They also created a blockchain-based game called Miner Wars, blending gaming with crypto mining.

GGEZ1 Foundation DAO

This group runs as a DAO — which means there is no CEO or boss, just community members making decisions together using blockchain votes. GGEZ1 focuses on building tech that helps decentralize the web, so power is not controlled by just a few companies. It has been recognized for its innovation and was named one of HackerNoon’s top Web3 startups in 2024.

U2U Network

U2U is building what is called a modular blockchain — basically a more flexible, scalable kind of blockchain. They are also working on DePIN, which means Decentralized Physical Infrastructure Networks. This platform aims to simplify the exchange of digital assets like internet bandwidth, storage capacity, and valuable data, serving a wide variety of users, from small companies to major government organizations. Their technology helps other Web3 apps run faster and more smoothly, without needing massive computing power from one source.

Mental Health in the Digital Age: Are We More Connected or More Alone?

We live in a time where our entire world fits in our pocket. With just a few taps, we can call a friend across the globe, post a photo for hundreds to see, or scroll through endless videos for a laugh—or a cry. The internet has brought us together like never before… but what has it done to our mental health?

Let us take a deep breath and dive into how the digital world is shaping our minds—for better and for worse.

Always Online, Always Watching

Think about it: When was the last time you went more than an hour without checking your phone? For most of us, our phones are practically glued to our hands. According to Statista, the average person spends over 6 hours a day online, and for teens, it is even more. Social media apps like TikTok, Instagram, and Snapchat have become digital hangout spots—but they also come with pressure.

Scrolling through perfect photos and curated videos can make people feel like they are falling behind, not attractive enough, or missing out on life. Psychologists call this “comparison culture,” and it is a fast track to anxiety and low self-esteem.

A study published in JAMA Psychiatry found that teens who use social media more than 3 hours a day are more likely to show signs of depression and anxiety. And it is not just teens—adults feel it too.

Doomscrolling: When Curiosity Turns to Stress

You have heard the phrase “just five more minutes,” right?

It is what we tell ourselves as we scroll through bad news, celebrity gossip, or sad stories late at night. That is called doomscrolling, and it has become a new habit in the digital age.

During the COVID-19 pandemic, doomscrolling skyrocketed. People wanted updates—but instead of calming us, it made us more worried, restless, and helpless. The University of Sussex found that doomscrolling is directly linked to higher stress levels and reduced sleep quality.

Digital Burnout: When the Brain Gets Tired

Ever felt mentally fried after hours of Zoom calls, endless WhatsApp chats, and bouncing between apps? That is digital fatigue—and it is real.

From students attending classes online to workers replying to emails at midnight, the lines between work, study, and rest have blurred. A report from Microsoft found that people working remotely experience more “digital exhaustion” because they never fully unplug.

Symptoms of digital burnout include:

  • Constant tiredness, even with sleep
  • Trouble focusing
  • Feeling overwhelmed by small tasks
  • Losing motivation

The Bright Side: Tech Can Also Heal

It is not all bad. In fact, technology is also helping millions manage their mental health—sometimes in life-changing ways.

  • Therapy apps like BetterHelp, Talkspace, and 7 Cups let people talk to licensed therapists from home. No travel, no waiting room.
  • Mindfulness apps like Headspace and Calm offer guided meditations, sleep sounds, and breathing exercises to help with anxiety.
  • Online peer support groups, including mental health forums and even TikTok creators, share personal experiences that help others feel understood and supported.

According to a 2021 report by Mental Health America, these tools especially help those in rural areas or people too nervous to go to therapy in person.

Finding Balance in a Busy Digital World

You do not need to delete your apps or go live in a forest (unless you want to). But we can take small steps to protect our minds while still enjoying digital life.

Here are things you can do:

  • Set time limits on social media apps. Try screen timers or take a digital detox day each week.
  • Follow accounts that make you feel inspired, not insecure.
  • Unplug before bed. Try reading, journaling, or stretching instead of scrolling.

  • Talk to someone. If you are feeling off, do not stay silent. Call a friend, message a support line, or see a therapist.

Final Thought: Real Connection Is Not Wi-Fi

We were never meant to live through screens 24/7. While tech connects us globally, real emotional connection still happens in quiet conversations, genuine laughter, and eye contact—not emojis.

The digital world is here to stay, but we can choose how we interact with it. When we take care of our mental health—both online and offline—we show up stronger, calmer, and more human.

Because at the end of the day, it is not about being online all the time—it is about being well.

Explainer: What Is Quantum Technology—and How Can It Help Us Today?

You have probably heard the word “quantum” before—maybe in a movie, on the news, or from someone talking about the future of technology. Sounds like science fiction, right? Well, it is real, and scientists are making it work. Quantum technology can seem super confusing. But in reality, it is just a different way of doing things using the tiniest parts of nature.

So, What Does “Quantum” Even Mean?

At its core, quantum technology is built on the strange and mind-bending rules of quantum physics—the science of the tiniest things in the universe, like atoms and particles of light. These things do not follow the normal rules we are used to in our everyday lives. They act very strange.

For example:

  • A tiny particle can be in two places at once.
  • It can spin in two directions at the same time.
  • Two particles can be connected so tightly that if you change one, the other reacts instantly, even if it is far away.

Scientists call this the quantum world. And now, they are building technology based on these weird but real behaviors.

Quantum Technology

Scientists and engineers are learning how to use these strange rules to build new kinds of machines, especially computers, sensors, and communication systems. This new field is what we call quantum technology.

1. Super-Smart, Super-Fast Computers (Quantum Computers)

Normal computers—like your laptop or phone—use “bits” to process information. Each bit is either a 0 or a 1.

But quantum computers use qubits, which can be both 0 and 1 at the same time. This means they can do many things at once, making them crazy fast at solving certain problems. Imagine trying to find the fastest route through a million cities. A normal computer might take years. A quantum computer? It could figure it out in minutes. This could help:

  • Find new cures for diseases
  • Design better materials for building and energy
  • Predict weather and climate much more accurately

Companies like Google, IBM, and Microsoft are already working on this, and some small quantum computers are already real.
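The “both 0 and 1 at the same time” idea can be made concrete with just two numbers. Here is a sketch in plain Python (a real quantum computer manipulates physical qubits; this merely tracks the math on a classical machine):

```python
import math

# A qubit is a pair of complex "amplitudes" (a, b) for the states 0 and 1;
# the squared magnitudes give the probabilities of measuring 0 or 1.
zero = (1 + 0j, 0 + 0j)  # definitely 0

def hadamard(state):
    """The Hadamard gate turns a definite state into an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    a, b = state
    return abs(a) ** 2, abs(b) ** 2

superposed = hadamard(zero)
print(probabilities(superposed))            # ~(0.5, 0.5): both outcomes at once
print(probabilities(hadamard(superposed)))  # ~(1.0, 0.0): back to definitely 0
```

Applying the gate once gives a 50/50 superposition; applying it again interferes the amplitudes back into a certain 0—the kind of interference trick quantum algorithms exploit to explore many possibilities at once.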

2. Unhackable Security (Quantum Communication)

Ever worry about your data being hacked? Quantum communication could help fix that.

It uses a trick where any time someone tries to eavesdrop, the message changes—and you know instantly that something’s wrong. That means hackers cannot sneak in without being caught.

This means that:

  • Your personal info and bank accounts could be safer
  • Governments and businesses could protect sensitive data better
  • It could stop cyber attacks before they even start

China already has a quantum satellite sending secure messages from space. Intriguing, right?
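The eavesdropper-detection trick described above is the core of the BB84 protocol: a bit measured in the wrong “basis” comes out randomized, so the sender and receiver can spot an intruder by the errors left behind. A toy, purely classical simulation of that bookkeeping (the physics is reduced to “wrong basis means random bit,” and all numbers are illustrative):

```python
import random

rng = random.Random(42)  # fixed seed so the demo is repeatable

def run(n, eavesdrop):
    """Toy BB84: bits are sent in random bases; measuring in the wrong
    basis randomizes the bit, so an eavesdropper leaves detectable errors."""
    errors = sifted = 0
    for _ in range(n):
        bit = rng.randrange(2)
        send_basis = rng.randrange(2)
        channel_bit = bit
        if eavesdrop:
            eve_basis = rng.randrange(2)
            if eve_basis != send_basis:      # wrong basis disturbs the qubit
                channel_bit = rng.randrange(2)
        recv_basis = rng.randrange(2)
        if recv_basis != send_basis:
            continue                         # bases differ: discard (sifting)
        sifted += 1
        if channel_bit != bit:               # matching bases should agree
            errors += 1
    return errors / sifted

print(f"error rate, no eavesdropper: {run(4000, False):.2f}")   # 0.00
print(f"error rate, with eavesdropper: {run(4000, True):.2f}")  # ~0.25
```

With no intruder the sifted key matches perfectly; an intercepting eavesdropper corrupts roughly a quarter of it, so comparing a small sample of the key immediately reveals the attack.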

3. Super Sensitive Sensors (Quantum Sensors)

Quantum tech can also be used to make ultra-sensitive sensors. These are gadgets that can notice very, very tiny changes in their surroundings.

  • Doctors could use them for earlier, better scans (like spotting diseases sooner)
  • Builders could find underground pipes or minerals without digging
  • Scientists could detect earthquakes or volcano warnings earlier

It is like giving machines a superpower: the power to notice stuff we would never catch before.

How Is This Useful Now?

While full-scale quantum computers are still being developed, smaller parts of quantum tech are already being used:

  • Some banks are already testing quantum encryption to keep accounts safe
  • Pharma companies are using it to test new drug ideas
  • Governments are investing billions in military and space uses
  • In the UK, Australia, and Germany, labs are creating real-world tools based on quantum tech.

So even though it might sound futuristic, quantum technology is slowly slipping into our everyday world. 

In a nutshell, quantum technology is not just for scientists in labs. It is a powerful tool that is already starting to change the way we live, work, and protect our world. From faster computing to safer communication, its benefits touch everything from medicine to climate change to national security.

And while we are still in the early days, the possibilities are huge.

So next time you hear “quantum,” do not tune out. It might just be the future knocking at your door.

Trump Signs Executive Order: Bringing AI to K–12 Classrooms


Earlier this week, President Donald Trump enacted an executive order that will introduce artificial intelligence (AI) into U.S. schools, beginning with kindergarten and extending through 12th grade. What is the objective? To ensure that young Americans are prepared for a future where AI will be present — in employment, in tech, and even in daily living. This move highlights the growing importance of the future of AI education in the United States.

The directive, titled “Advancing Artificial Intelligence Education for American Youth,” was signed on April 23, 2025, and it promises significant changes for students, educators, and schools nationwide.

What’s in the Executive Order?

First, President Trump is setting up a special AI Education Task Force — basically a team of experts from different government departments (like Education, Labor, and Energy). Their mission is to lead AI curriculum development and recommend the best strategies for teaching AI across grade levels.

Secondly, the directive states that educators must be prepared to comprehend and implement AI in their teaching environments. The Department of Education will allocate additional funds to assist teachers in understanding AI — including the technology itself as well as how to utilize AI tools to enhance lessons and better equip students for the future.

Third, the government intends to collaborate with technology firms and educational institutions. These collaborations will develop free materials — such as lesson plans, activities, and online courses — to assist schools in teaching AI subjects without needing to create everything independently.

Fourth, President Trump is launching a Presidential AI Challenge, a national competition where students and teachers can show off cool projects involving AI. Winners might get prizes, recognition, or even scholarships.

Finally, the order ties AI education to job training. The Department of Labor will work on building AI skills into apprenticeships and workforce programs, so that older students can smoothly move from school into high-paying, tech-focused careers.

Why Is Trump Doing This?

Trump and his team believe that AI signifies the future of the American economy — similar to how factories and computers once did. He cautions that if American students do not start learning AI soon, the country may fall behind nations like China and South Korea, where children are already taught robotics, coding, and AI in their educational systems.

“We must ensure our children can outwit the machines instead of being substituted by them,” Trump stated at the signing event.

What Does It Mean for Schools?

In the short term, teachers will probably need a lot of help and resources. Many teachers today do not have experience with AI — it is a new and complicated subject. Training them will take time, money, and support.

Some schools, especially in wealthier areas, already use AI-powered tools like tutoring apps, lesson planners, and virtual science labs. But many schools in rural or low-income areas do not have the same access. There is a risk that the AI push could widen the gap between rich and poor schools unless the government makes sure funding is fairly distributed.

Plus, there are important questions about ethics: How should kids be taught to use AI responsibly? How do we protect their privacy? Should AI ever replace human teachers? These are big discussions that will need careful thinking.

The Bigger Picture

At the end of the day, this executive order is about more than just new technology. It is about changing what students learn and how they learn it — preparing them for a world where AI will be everywhere.

Whether you love Trump or not, experts agree on one thing: learning about AI is no longer optional. It is becoming just as important as reading, writing, and math.

This executive order is the first big step toward making AI education a normal part of growing up in America.

Saying ‘please’ and ‘thank you’ to ChatGPT costs millions of dollars, CEO says

Politeness is deeply woven into human culture. From childhood, we were taught to say “please” when asking for something and “thank you” when receiving it — a simple ritual of respect and kindness. But in the age of artificial intelligence (AI), where humans are increasingly talking to machines, even this timeless courtesy carries an unexpected price tag.

In a surprising revelation, OpenAI CEO Sam Altman recently disclosed that these small, polite phrases — when directed at AI models like ChatGPT — are costing the company tens of millions of dollars each year. While that number might sound exaggerated at first, it highlights a broader and more pressing reality about the hidden energy and environmental costs of AI interactions, as well as the growing AI energy consumption challenge.

A Polite Word, A Heavy Bill

Answering a question on the platform X (formerly Twitter), Altman candidly explained that the extra words we include in prompts, like “please” and “thank you,” slightly increase the length and complexity of the task for the AI. This, in turn, requires more processing power — and when you multiply that tiny increment by millions of users engaging with ChatGPT every day, the cumulative impact becomes financially staggering. While each individual instance may seem harmless, it is like adding a drop of water to a bucket — eventually, the bucket overflows.

Altman’s comments were more than just an amusing narrative; they pointed toward the very real operational costs AI companies face. Training and running large language models require thousands of powerful graphics processing units (GPUs) that consume vast amounts of electricity. Even generating a single ChatGPT response can burn as much energy as powering a dozen LED light bulbs for an hour, further underscoring the AI energy consumption dilemma.

The Environmental Shadow

The costs are not only financial. There is an environmental price, too. Data centers — the sprawling, buzzing facilities that power AI — already account for roughly 2% of global electricity usage, a figure steadily climbing with the rise of AI technologies. Moreover, these centers use tremendous amounts of water to cool the hot servers, contributing to the environmental impact of AI.

In this sense, every little polite prompt adds a molecule to the AI carbon footprint and strains already stretched environmental resources. It paints a new picture: one where the simple act of being courteous to a machine, while emotionally uplifting, could be environmentally taxing.

The Emotional Trade-off

But should we really stop saying “please” and “thank you” to ChatGPT? Not necessarily. Altman himself noted that the expense, while significant, was “well spent.” Politeness in human-AI interactions is not a meaningless nicety; it is part of building trust and making conversations with AI feel more natural and human.

Polite language has also been shown to lead to better, more respectful AI outputs. According to AI ethics expert Dr. Lance B. Eliot, courteous prompts often guide AI models toward giving more professional and helpful responses. It is a bit like teaching a child — the way you ask often shapes the way they respond.

Plus, politeness toward AI can reflect broader societal values. A 2024 survey found that 67% of Americans regularly use polite phrases with chatbots, and many said they did so because “it is just the right thing to do.”

The Larger Conversation About AI and Sustainability

This quirky news story opens the door to a deeper conversation: how do we balance humanity’s desire for ethical, friendly AI interactions with the pressing need for sustainability?

As AI becomes more embedded in education, healthcare, customer service, and creative industries, millions — soon billions — of small daily interactions will quietly draw on vast environmental resources. The problem is not people being polite; the problem is the lack of scalable, energy-efficient AI systems that can reduce the environmental impact of AI.

Solutions like more efficient hardware, renewable energy-powered data centers, and smarter models that can process requests with lower energy demands will be critical for advancing sustainability in artificial intelligence. Otherwise, even our politeness could unwittingly contribute to the global climate crisis.

Final Thoughts: A New Etiquette for a New Era

In a way, the revelation that politeness costs millions is both humbling and inspiring. It reminds us that nothing in our digital lives is truly “free” — not even a simple “please.” But rather than discourage courtesy, it encourages mindfulness. Perhaps the future is not about being less polite to AI, but about demanding better, greener technologies that allow kindness — human kindness — to flourish without hidden consequences.

In the end, it is a story not just about AI, but about what kind of world we want to build — and the small but mighty role every “please” and “thank you” can play in it.

The Future of Artificial Intelligence: Insights from Tech Visionaries


Artificial Intelligence (AI) is changing how we live, from the way we shop to how doctors diagnose illnesses, highlighting the growing role of AI in healthcare. But what comes next? Some of the biggest names in tech have shared their predictions, and their ideas sound like something straight out of a sci-fi movie. Only this time, it is all real.

Demis Hassabis and the Quest for AGI

Demis Hassabis, the head of Google DeepMind (one of the world’s top AI research labs), says that in the next 5 to 10 years, we might create something called AGI (Artificial General Intelligence). This is a type of AI that does not just answer questions—it can think, learn, and solve problems just like a person.

Hassabis believes AGI could help cure diseases, tackle climate change, and even transform how we learn in school. But he also warns that such powerful AI should be handled carefully, with rules and cooperation between countries to avoid any dangers. As experts continue to explore what AGI makes possible, it is critical to weigh its benefits against its risks to ensure safe, ethical development.

Ray Kurzweil’s Vision of Human-AI Integration

Futurist Ray Kurzweil has an even wilder prediction. He says that by 2029, AI will be as smart as humans, and later on, we might even merge our brains with computers. Sounds crazy? He says we will be able to think faster, access the internet with our minds, and even upload memories, like a real-life sci-fi movie. His vision points to a future where human-AI collaboration could enhance intelligence, creativity, and productivity.

Kurzweil also believes that with smarter machines doing more work, we could eventually live better lives with less stress, and maybe even stop working full-time altogether.

Bill Gates on AI’s Impact on Jobs

Bill Gates, co-founder of Microsoft, shares his thoughts on AI and the future of work, particularly its impact on daily tasks and job roles. He says that AI will take over boring or repetitive tasks, like paperwork, so that teachers and doctors can focus more on real people. Gates also thinks that in the future, we might even work fewer days a week and still have enough money to live well.

He is hopeful that AI can make life easier, not harder, but only if we plan for it and make sure everyone benefits.

What This Means for All of Us

These tech leaders believe that AI can help us live longer, smarter, and maybe even happier lives. But they also agree: we need to use it wisely, set rules, and include everyone in the conversation. This underscores the importance of responsible AI development to ensure technology benefits everyone. Whether you’re excited or nervous about AI, one thing is clear—big changes are coming, and it’s better to be ready.

* Copyright © 2024 Insider Inc. All rights reserved.

Registration on or use of this site constitutes acceptance of our Terms of Service and Privacy Policy.