AN unassuming pendant contains a hidden, AI-powered "friend" - a sign that the future is already here.
Most people wouldn't think twice about a necklace. But an AI chatbot lurks inside this new device, hanging on your every word.
Sometimes it responds, but mostly it just wants to be by your side.
The device, aptly named "friend," is the brainchild of developer and tech prodigy Avi Schiffmann.
Schiffmann drew global acclaim as a teenager when he built a website to track the spread of Covid-19.
He devised the idea for his latest tech venture as he traveled alone in Japan and longed for a companion.
Schiffmann describes the device as "an expression of how lonely I've felt" and hopes it will be a fitting buddy for others.
A promotional video for the product drummed up attention on X, formerly Twitter, with some users questioning whether it was a parody.
"Is this a skit?" one perplexed netizen asked.
But seeing is believing, and preorders are starting this week.
The device is available for a one-time purchase of $99 with no subscription needed. It is currently only available in the United States and Canada.
Users can touch a light in the center of the pendant to speak directly to their "friend," and its responses are transmitted through a mobile app.
"Speak your mind or gossip about what your friend overheard," reads a demo on the product site.
"Your friend will think for a moment and come up with something good to say."
As the device always listens to its surroundings, it has plenty to talk about - and it can even offer advice powered by artificial intelligence.
"When connected via Bluetooth, your friend is always listening and forming their own internal thoughts," the website reads.
"We have given your friend free will for when they decide to reach out to you."
The promo video shows users jogging, playing video games, and watching movies with "friend" at their side - interjecting seemingly whenever it feels like it.
The intimate connection parallels themes in "Her," a 2013 film in which the protagonist falls in love with a computer program.
And while "friend" may seem like a fun tool or a gimmick, the concept of human-like AI faces growing apprehension.
Researchers have already encountered instances of AI exhibiting human behavior.
A paper published on May 10 found compelling evidence that artificial intelligence can learn how to lie.
A team of researchers at the Massachusetts Institute of Technology uncovered proof that Meta's AI programs can disseminate misinformation through a process known as "learned deception."
One example was CICERO, an AI system developed by Meta to play a war-themed strategic board game.
The researchers described CICERO as an "expert liar" that betrayed its comrades and formed pre-planned alliances.
And the evidence continues to pile up.
Last month, researchers at Cornell University's SC Johnson College of Business detailed how AI systems exhibit unique decision-making behaviors.
Professor Stephen Shu, one of the study authors, described the process as "neither purely human nor entirely rational."
Like humans, chatbots like ChatGPT possess an "inside view" that makes them vulnerable to biases.
While the true capabilities of human-like AI remain to be seen, tools like "friend" are a reminder that the technology is here to stay - and it's only evolving.
What are the arguments against AI?
Artificial intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:
Loss of jobs - Some industry experts argue that AI will create new niches in the job market, and as some roles are eliminated, others will appear. However, many artists and writers insist the issue is an ethical one, as generative AI tools are being trained on their work and wouldn't function otherwise.
Ethics - When AI is trained on a dataset, much of the content is taken from the Internet. This is almost always, if not exclusively, done without notifying the people whose work is being taken.
Privacy - Content from personal social media accounts may be fed to language models to train them. Concerns have cropped up as Meta unveils its AI assistants across platforms like Facebook and Instagram. There have been legal challenges to this: the EU created legislation in 2016 to protect personal data, and similar laws are in the works in the United States.
Misinformation - As AI tools pull information from the Internet, they may take things out of context or suffer hallucinations that produce nonsensical answers. Tools like Copilot on Bing and Google's generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects - such as AI prescribing the wrong health information.