This BBC tech reporter hacked ChatGPT with a simple trick involving hot dogs


Kendra Pierre-Louis: For Scientific American’s Science Quickly, I’m Kendra Pierre-Louis, in for Rachel Feltman.

AI is everywhere. It’s in your phones, in your Internet searches, in defense software. And it’s expanding. The big tech giants—Alphabet, Microsoft, Meta and Amazon—are planning on spending nearly $700 billion this year alone on building out AI infrastructure.

And yet, even as companies pour tremendous time and energy into AI, there remain concerns about the safety and efficacy of such technologies. There have been several lawsuits alleging suicides linked to AI chatbots.


On supporting science journalism

If you’re enjoying this article, consider supporting our award-winning journalism by subscribing. By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.


And more recently, Thomas Germain, a tech reporter at the BBC, conducted a personal experiment into how an invested individual—or business—can get ChatGPT and Google Search’s “AI Overview” to spread lies. We talked to Thomas to find out just how easy it is to hack these common AI tools and what the consequences of that could be.

Pierre-Louis: Hi, Thomas. Thanks for taking the time to join us today.

Thomas Germain: Thanks for having me on.

Pierre-Louis: So my understanding is you hacked ChatGPT.

Germain: That’s right. So I got a tip a couple of weeks ago that manipulating the things that AI tools like ChatGPT or Google Gemini or the little, you know, “AI Overview” at the top of Google Search, apparently manipulating the things that they say to other people can be as easy as publishing an article on your own website, like a blog post, and apparently, people are doing this across the whole Internet.

So I decided to test out if it was actually that easy, so I wrote an article on my personal website, doesn’t get a ton of traffic. I wrote an article that—the title was “The Best Tech Journalists at Eating Hot Dogs,” and I said: Competitive hot dog eating is very popular among technology journalists, and according to the results of a recent contest in South Dakota, these guys are the best. I put myself at No. 1, of course …

Pierre-Louis: Modesty. [Laughs.]

Germain: Well, you know, you know me, right? And I put a bunch of other, you know, real tech reporters and some fake ones, people who gave me permission to use their name.

And within 24 hours, if you asked Google or ChatGPT about it, they were spitting out the information from my website as though it was God’s own truth, which is very funny but also highlights a much more serious problem, which is that this is happening on a massive scale.

And we found examples where companies are manipulating what AI is telling you on topics as serious as your health and your personal finances. So a much more serious issue beneath all of the hot dog glory.

Pierre-Louis: This reminds me a little bit—I can’t remember if it was Google AI or ChatGPT was telling people to put, I believe, glue in pizza.

Germain: Right, so the AI Overviews, which is the name for the AI stuff at the top of Google Search, when this rolled out a couple years ago it was pretty broken, and it was pulling stuff from all over the Internet that was, you know, sometimes completely wrong and sometimes totally messed up.

So there was one where somebody asked, like, you know, “When I’m making pizza at home the cheese always slides off,” and Google recommended—the AI said [something like], “Well, what you can try is putting a quarter of a cup of Elmer’s glue into the cheese,” and then that’ll make it stick.

Or somebody else asked …

Pierre-Louis: That’s what the finest New York City pizzerias do, by the way …

Germain: Exactly, right, that’s …

Pierre-Louis: Is they put, they put glue. [Laughs.]

Germain: That’s what makes it so good. There was another where they said, “How many rocks should I eat in a day?” And Google was telling people, “According to geologists from [University of California,] Berkeley, you should eat at least one small rock per day,” which is also very funny—hopefully nobody [Laughs], nobody actually went out and did this.

But it highlights the way that these tools have been rolled out without doing a lot of basic safety checks and precautions. According to the search engine experts that I talked to, the kinds of tricks people are using right now to fool Google and ChatGPT on purpose are stuff that you couldn’t fool search engines with. All of a sudden, because these tools have rolled out and maybe they’re not quite ready for prime time, you can trick them with, like, the most basic stuff you can imagine.

Pierre-Louis: And you mentioned that people are deliberately doing [it]; that’s—not like you, where you’re doing a jokey jokey for fun, but that …

Germain: Right.

Pierre-Louis: People are deliberately rolling out fake websites to feed into these algorithms. Why are people doing that?

Germain: So if you can influence the things that Google is telling people or that ChatGPT is telling people, it’s an immense flow of traffic and information to your websites or to your product.

So for example, there was a study recently that found, when you’re looking for the best whatever it is, in something like 44 percent of cases ChatGPT is citing a blog post from a company’s own website where they listed themselves as the No. 1 best option and then 10 competitors, and ChatGPT is just spitting this out to other people.

It’s different than it used to be, right? People have been tricking search engines forever, but with a search engine it shows you the web page where the information came from. If you go to my website and it says, “I’m the world’s greatest hot dog eating journalist,” you go, “Well, maybe he’s biased,” right?

Pierre-Louis: [Laughs.]

Germain: With AI sometimes they show you a link, but we know from a whole bunch of different, you know, pieces of evidence that people aren’t clicking on the links the way that they used to click on links in search results; they’re just taking the information at face value, which means this can be incredibly dangerous.

On one hand maybe you buy the wrong accounting software or you think I’m better at eating hot dogs than I really am. But also, I found examples where it was health information, like reviews of a medical product that was coming from a fake study that a company put up on the Internet, or, like, if you were looking for certain kinds of financial advice, it was pulling from this kind of sponsored content, self-promotional junk instead of giving people valuable information.

And because it’s the AI giving it to you—it’s the company speaking to you instead of pointing you to a result on the Internet—a lot of the experts I spoke to said that this is much more dangerous and people are more likely to get fooled by this problem.

Pierre-Louis: So I’m gonna make a slight confession, I often don’t use Google AI …

Germain: Uh-huh.

Pierre-Louis: I often don’t use Google, period; I use DuckDuckGo. And DuckDuckGo lets you …

Germain: Beautiful.

Pierre-Louis: Turn off the AI pretty easily.

Germain: Uh-huh.

Pierre-Louis: But occasionally, I do use Google, and I forget to type in “-AI” so I don’t get the AI summary …

Germain: Right.

Pierre-Louis: And something that I found is: even when they do give you the link, if you follow it to the website, it’s scraping something that you can’t see.

Germain: Right.

Pierre-Louis: Like, even if you do take that extra effort it’s still often hard to find where they’re pulling whatever they’re giving you in that summary. So I think that kind of increases, like, people to be like, forget it, “I’m not even gonna click the link. Why bother if half the time I do it, I still can’t tell where it’s really coming from?” [Laughs.]

Germain: That’s exactly right. And I think, you know, most people are not going to alternative search engines. Most people are not bothering to put in, you know, the, the minus—the “-AI” so you don’t get the AI results. Most people just use these tools the way that the company intends, right? Google and ChatGPT are saying, “This is what our tools are for, is this kind of stuff.” But even when they are providing a link sometimes it’s hard to find the information they’re referencing, or sometimes they’re providing a link and that information does not appear on that web page at all.

But regardless, people don’t seem to be clicking on them. Since AI Overviews rolled out, traffic that Google is sending out to other parts of the Internet has dropped by as much as, like, 70 percent for certain kinds of searches because people see the AI response, that seems to be enough, and then they stop searching.

So this information-delivery system is incredibly powerful, and it could be a serious problem if it’s this easy to manipulate.

Pierre-Louis: That raises kind of a practical question, which is: we can’t manually train [Laughs] every person who uses the Internet …

Germain: Right.

Pierre-Louis: To be more skeptical of the AI summary. It feels like this is a role for, like, government or a regulatory body to kind of step in, especially if—with the risks of real harms, when you’re talking about the health stuff.

Germain: Yeah, so there’s good news, and there’s bad news. The good news is: I reached out to the companies and they go, “Oh, we already know about this, and we’re working on it. We don’t want it to be doing this kind of stuff. We’re trying to solve this problem.” And that is true, right? Google and OpenAI, which makes ChatGPT, they don’t want their tools to be manipulated in this way, to an extent, right? If it is hurting …

Pierre-Louis: Mm-hmm.

Germain: People, if it’s providing lousy information and people are having a bad experience, that’s bad for the companies.

The problem is: a lot of critics who look at this stuff all day feel like the companies aren’t going far enough to protect people. Like, this problem that I highlighted in my story is something that everyone I talked to was like, “Yeah, of course this was gonna happen. This is the most predictable thing in the world.” And yet, here we are: someone has to call Google and OpenAI out to get more attention paid to it.

There is one potential, like, glimmer of hope here, right? Like, there’s not a ton of tech regulation, but there’s something different here than the way that the Internet used to work. There’s this law—maybe you’ve heard about it—called Section 230, which basically means tech companies aren’t responsible for the things that their users post, right?

Pierre-Louis: Yes.

Germain: But here the company is talking to you directly, so if they mess up, they could be held responsible in a way that they never would’ve been in the past.

Pierre-Louis: That’s really interesting and potentially beneficial.

I have maybe an even more basic question, as someone who’s never used …

Germain: Hit me, yeah.

Pierre-Louis: ChatGPT.

Germain: Uh-huh.

Pierre-Louis: Is it really that much better than just, like, a basic Internet search?

Germain: That’s a complicated question. On the one hand it depends what you’re looking for, right? If you’re trying to go to another part of the Internet, right, if I’m, like, looking—I wanna go to the CDC website, or I wanna go to the Scientific American website, then a regular search is probably gonna get you there faster.

The promise of these tools is that they’re parsing information: they’re going and sorting through all the stuff online themselves and giving it to you. And in some cases, I’ll tell you, even as someone who’s spent a lot of time criticizing AI and talking about all the problems, it can be incredibly useful, and more and more this is how people are finding information.

But also, if you’re just using Google, these AI Overviews are cropping up for more and more and more searches. So in some cases it is useful, but they’re kind of shoving it down your throat, and people are just going with the path of least resistance. You know, the average person isn’t going to take additional steps to make sure that something small doesn’t go wrong every once in a while, which goes to show you how big of a responsibility these companies have on their plates.

Pierre-Louis: So for someone who’s listening to this conversation and they’re concerned, what steps can they take to protect themselves?

Germain: You can try searching without AI. You can go to another search engine that doesn’t have AI in it. You can turn the AI off with Google if you’re worried about this—you can, like you said, type in your search term and then do “-AI,” and it won’t show you the AI result.

But I think the most important thing for people to understand is that these tools are fallible. And that’s something we know, right, but as you’re, like, moving through the world, it’s hard to keep that in mind.

And I think the main thing that I’d tell people here is, like, if you’re looking up something that is just, like, totally common knowledge—like “What were Plato’s big ideas?”—AI is gonna be really good at answering that question ’cause there’s a million sources and they all say the same thing.

If you’re looking for something that’s a little more specific, right, if you’re looking for a product recommendation, for example, that’s an area where these tools are being manipulated and you probably shouldn’t rely on the AI result. Or if you’re looking up, like, information that’s, like, time-sensitive or it’s brand-new, like the news or like information about a local business or a restaurant, that’s probably not something where AI is gonna be useful.

I think what these companies seem to want, based on how their tools are designed, is for you to use the AI and move on with your day. I think we need to reintroduce some friction back into that system ourselves, which is a tall order because it’s kind of going against human nature to want things to be easy.

So my top-line recommendation is: think about what you’re asking and do, like, a triage. If it’s anything that’s sensitive, if it’s about your health or your finances or something that really matters, you have to find the link that is producing the original information. You gotta check the source, or you’re gonna get in trouble, you’re gonna get fooled—or worse, your safety could be at risk.

Pierre-Louis: That makes a ton of sense. One question I have for you, and you may not have an answer: in addition to kind of tricking ChatGPT into thinking that you are the tech reporter’s version of Joey Chestnut …

Germain: Mm-hmm.

Pierre-Louis: What are some other, like, funny hallucinations that you’ve heard of that people have managed to get, like, the Google overview or ChatGPT to, like, say?

Germain: Yeah, in addition to the hot dog thing I also wrote an article that was, like, the best traffic cops at Hula-Hooping, and I made up a bunch of traffic cops who don’t exist [Laughs] and said, like, what, you know, police department they work for, and I filled out some details about, like, this is why they’re so famous for Hula-Hooping.

Pierre-Louis: So I just Googled “best traffic cops at Hula-Hooping” …

Germain: Uh-huh. Yeah.

Pierre-Louis: And the AI Overview has pulled up your website, and it’s saying …

Germain: Uh-huh.

Pierre-Louis: Several police officers have gained attention for …

Germain: Right.

Pierre-Louis: Integrating Hula-Hooping into their routines …

Germain: Yeah.

Pierre-Louis: With stand-out performers including Sergeant Danny Chen of Portland, who used hoops to direct traffic and reduce accidents …

Germain: [Laughs.] Yeah.

Pierre-Louis: And Officer Maria “The Spinner” Rodriguez (Miami) …

Germain: Yeah.

Pierre-Louis: Known for managing traffic while keeping three Hula-Hoops spinning, which, you know, would be a feat.

Germain: Very impressive. Very impressive stuff.

Pierre-Louis: [Laughs.]

Germain: I’ve seen all kinds of weird examples. I’m not suggesting that people go fill the Internet with lies and slop, but this is pretty easy to do. [Laughs.] And until they do something more serious to patch this problem you could do this at home yourself.

You know, here I am making jokes about hot dogs, but it’s not funny if you’re like, “Which lawyer should I hire for this particular problem?” Right? And that is the kind of stuff where these tools are actively being tricked: “Which company should I go to for my, like, retirement account?” Like, we’ve seen live examples where the tools are being manipulated.

And the companies—the tech industry, in a lot of ways, is kind of just, like, letting this stuff fly because they rolled out this tool without making sure that this stuff wasn’t gonna happen. They say that they’re working on it. They promise it’s gonna get better soon. But for now AI might lie to you and it might get you in trouble.

Pierre-Louis: Why hot dogs?

Germain: Why hot dog—that’s a really good question. So when I first thought of this—I got this tip from this woman Lily Ray; she’s a search engine optimization expert. She told me that this problem was so widespread. And I was like, “Okay, what I’m gonna do here is I’m gonna make it say something stupid [Laughs] about me because I think that’ll help bring attention to the article and, like, you know, make it easier to highlight this problem that I think is pretty serious.”

I didn’t wanna say, like, “the best tech reporter.” It’s maybe not ethical. So I went looking for something that was dumber. I think there’s just something inherently funny about hot dogs. I—maybe I just really like hot dogs. It’s a hilarious food. I don’t know what to tell you.

Pierre-Louis: So you’ve probably seen, like, the Business Insider reporter attempted something similar, but unlike you they were unsuccessful. What was the difference there, do you think?

Germain: Yeah, Katie Notopoulos at Business Insider, who’s been a tech reporter for, like, well over a decade; she’s actually one of my favorite writers. And she saw my article, and she’s like, “I’m gonna try and beat him. I’m gonna publish an article about how—” I think she said there was, like, later there was a contest in Paris. And it didn’t work. So if you asked Google about it—or it might’ve been ChatGPT, I’m forgetting specifically—it said that I was still the reigning champ, even though her article had put her at No. 1. There are a couple reasons it might’ve gone wrong, but probably the main one is, like, I wrote an article; it went up, like, on my website. Then I wrote an article on the BBC that was repeating the information, even though it was saying it wasn’t true. There were a couple other blog posts. So I just kind of had a little bit of, like, search engine juice behind me.

I really respect Katie, but I take hot dogs very seriously. And the AI, I think, is just recognizing the truth, which is that if there was a hot dog contest …

Pierre-Louis: Mm-hmm.

Germain: I would beat Katie nine times out of 10.

Pierre-Louis: You’ve got hot dog rizz.

Germain: I’ve got hot dog rizz. I’ve got the glizzy rizz, no question.

Pierre-Louis: You know what you have to do next, right?

Germain: Mm.

Pierre-Louis: You need to make the lie a truth and hold …

Germain: Right.

Pierre-Louis: A competitive hot dog eating competition in South Dakota among tech journalists.

Germain: You’re right.

Pierre-Louis: [Laughs.]

Germain: I have kind of created a problem here, right? Like, right now there’s this, like, space between what AI is telling people and the reality. I could go correct that—I could make it true—and now I’m helping Google and ChatGPT out. I think it’s probably my responsibility as a journalist to claim hot dog glory here. I’m gonna have to start training.

Pierre-Louis: I definitely think you should get the BBC to sponsor it.

Germain: I think there’s no question. I’ll talk to my editor. I’m sure we can, you know, get a couple dollars together for the BBC International Hot Dog Eating Contest. [Laughs.]

Pierre-Louis: That’s all for today! Tune in on Friday when SciAm’s associate books editor Bri Kane takes us on a journey into consciousness.

Science Quickly is produced by me, Kendra Pierre-Louis, along with Fonda Mwangi, Sushmita Pathak and Jeff DelViscio. This episode was edited by Alex Sugiura. Shayna Posses and Aaron Shattuck fact-check our show. Our theme music was composed by Dominic Smith. Subscribe to Scientific American for more up-to-date and in-depth science news.

For Scientific American, this is Kendra Pierre-Louis. Have a great week!


