
📈 Here’s how you can tell if something is AI-generated


Happy Tuesday.

If I seem a little out of place, well, I am. My living room flooded today. Spent the whole day asking ChatGPT if it could help me remove drywall.

That WON’T stop us from delivering some smashing AI content.

Specifically…

  • Deep dive: How to spot AI-generated content. You’re going to see more and more of it if you use the internet.

  • Why Apple’s keynote shows how much AI impacts your tech life

  • What the biggest AI company is worrying about

Ready? Good. Let’s begin.

Deep dive

How to sniff out AI-generated content

Let’s face it: you can’t trust the internet anymore.

Well, you never could. But it used to take more than a grainy photo of the Loch Ness monster to fool us.

No longer.

It’s not just fake photos of Trump almost getting arrested going viral. It’s AI-generated text. How would you like to be broken up with by ChatGPT? No one wants that. I don’t want that. Do you want that? No.

So let’s see how we can avoid being fooled.

We’ll start with the easier of the two: photos.

Here are a few ways to tell if a machine made that weird photo.

Look at the details.

1/ Check the background. AI has a problem with the small details in backgrounds. It’s really just filling in what’s generally supposed to be there. Take a cityscape, for example. See cars about to collide on the street in the background? Mismatched floors on office buildings? Those are warning signs. Do people’s faces get really, really creepy the farther they are from the front of the photo? Dead giveaway.

2/ Text. Text, text, text. Always check the text. Here’s what happens when I put “Waffle House sign” into Midjourney. Try to say it five times fast.

3/ Hands. AI imagery still has a problem with hands. Midjourney claimed to fix this, but hands still look pretty…off. Every digit looks like a thumb. Or every digit looks like a finger.

4/ Teeth. Unsettling, but a good sign. Nobody has 26 identical teeth. In AI photos they absolutely do.

5/ Objects blending into each other. Headphones that magically turn into the arm of a pair of glasses. Hands that meld into a handheld microphone. The closer you look, the more blended duckies you see.

6/ Textures. If a texture looks smooth, or too perfect, or odd, then it’s probably AI-generated.

7/ Comments. Others are usually quick to spot AI-generated images too. Lots of “fake” and “AI” and “what’s the prompt” comments from others are usually a dead giveaway.

8/ Common-sense items that are off or missing. Legs that are too long. Arms that are missing. You get the idea. And Terry Crews isn’t the president.

In short, stop and think. Make your parents proud. Don’t believe everything you read on the internet.

Let’s move on to AI text.

This is way trickier. There aren’t (yet) watermarks that show up on AI-generated text.

So, what ways can we try?

Okay, you’re gonna laugh at the first one.

  1. If anywhere in the text it says “As an AI language model…”, it’s AI-generated. You’d be surprised how many people don’t realize this, even when they’re copy-pasting their homework. A quick Ctrl+F for “language model” or “AI model” should do the trick.

  2. Tools like GPTZero and OpenAI’s detection tool. If you’re checking any text to see if it passes the sniff test, run it through these. I know they’re sometimes wrong. You’ll still want to check, because most people won’t know they’re sometimes wrong.

  3. Common sense. I’ll explain.
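The Ctrl+F trick from point 1 can be automated in a few lines of Python. A minimal sketch; the `AI_TELLS` phrase list here is illustrative, not exhaustive, and the exact leftovers vary by model:

```python
# Minimal sketch: flag text that contains common AI boilerplate phrases.
# The phrase list below is an illustrative assumption, not a complete catalog.

AI_TELLS = [
    "as an ai language model",
    "as a large language model",
    "i cannot fulfill",
    "i don't have personal opinions",
]

def looks_ai_generated(text: str) -> bool:
    """Return True if the text contains an obvious AI boilerplate phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in AI_TELLS)

print(looks_ai_generated("As an AI language model, I cannot write your essay."))  # True
print(looks_ai_generated("Happy Tuesday. My living room flooded today."))  # False
```

Of course, this only catches the laziest copy-pastes. Anything more subtle needs the tools, and the common sense, below.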

First, a test.

Here’s an article about Coursera’s co-founder saying AI will not destroy humanity.

And here are two summaries: one written by me, the other by an AI trying to copy me.

Which is which?

Option A

Title: Coursera co-founder doesn’t understand why everyone thinks AI will destroy us all

Body: “I have to admit, I don’t get it,” said Andrew Ng. Rare words from the co-founder of the learning platform Coursera.

“I’m struggling to see how AI could pose any meaningful risk for our extinction.”

Ng took to Twitter this week to calm the anxious crowd of techies who fear AI will bring about the end days.

Color him skeptical: “I don’t see how it can lead to human extinction,” he continued. He highlighted, let’s say, “easier” risks like bias, fairness, and accuracy, rather than, you know, Terminator 2: Judgment Day.

Now, not to be that guy, but he did just launch a course (with OpenAI) on learning about AI. So…might be biased. Still! Might be right. We hope.

Option B

Title: Coursera Co-founder Questions AI's Existential Threat

Body: Andrew Ng, Coursera co-founder, expressed skepticism about the concerns over AI leading to human extinction. Ng took to Twitter to address the anxieties of tech enthusiasts who fear the worst-case scenario.

"I don't get it," Ng admitted, perplexed by the notion of AI posing a significant risk to humanity. He acknowledged other challenges like bias, fairness, and accuracy, downplaying the idea of AI-induced doomsday scenarios.

Critics argue that Ng's recent collaboration with OpenAI might influence his perspective. However, his extensive involvement in AI projects and expertise warrant careful consideration.

Ng invited his audience to share their thoughts on whether AI truly threatens human existence, indicating his commitment to fostering an open discussion on the topic.

The rise in AI's popularity, driven by OpenAI's ChatGPT release, has generated both fascination and unease regarding its potential negative impact. Ng's contrarian viewpoint adds an intriguing dimension to the ongoing dialogue, prompting a reevaluation of assumptions surrounding AI.

Spoiler

I wrote the first one. AI did the second.

Let’s see if GPTZero, the most popular tool to detect AI text, can sniff it out.

ChatGPT’s:

Mine:

Alright, so does this mean ChatGPT is perfect at avoiding detectors? Not quite.

Also, see this.

Odd way to find out we live in a simulation.

You can also just ask ChatGPT to rewrite the text to avoid AI detectors, and it sometimes works.

Oh. Remember how I talked about common sense?

Take a look at this. See how…not real it seems?

I’d be skeptical of any account online that talks like this. From now on.

Think I missed something? Let me know.

AI news that should interest you

1/ Apple picks features over flashiness when it comes to AI

Unless you’re living under a tech rock (and if so, we’re jealous), you probably noticed some big news. Apple blew it all up yesterday by announcing an augmented reality headset.

It’s everything you’d expect an Apple device to be. Prohibitively expensive, outrageously premium, and completely different from anything anyone else has made.

What was missing from the keynote was anything AI-related.

At first glance.

Apple slipped little Easter eggs throughout the keynote that hint at its AI capabilities.

Let’s take a look.

  • Upgrades to AirPods Pro mean AI handles turning off noise cancellation when you start talking to someone. Apple didn’t even use the word “AI” here. It’s still cool as heck.

  • A new machine learning model for autocorrect means your ducking words will actually be what you wanted them to be. It’s also on-device, so no sending that data to process elsewhere.

  • Improved machine learning to identify your dog in photos, and gather all the photos of your dog. They should have led with this. It’s the most important thing they’ve ever released.

My point? Advancements in AI don’t always need to be flashy. Guarantee you’ll use these more than ChatGPT. Unless you’re a weird nerd like me.

2/ AI might be all hype, at least for now.

When the biggest CEO in AI wants an interview removed from the internet, we’ve got two thoughts.

  • Have they seriously never heard of the Streisand effect?

  • What’s the juicy goss they wanted scrubbed away?

We’ve got answers on that second one.

OpenAI has three wishes.

  • More GPUs. They need more hardware to run AI models on, and they need it yesterday.

  • Better context and memory for ChatGPT. They want you to stop having to give ChatGPT the same context over and over. Perhaps some form of memory?

  • A better use case for plugins. Turns out companies want ChatGPT in their products. They don’t want their products in ChatGPT. So, pump the brakes on ChatGPT plugins being the “AI App Store”.

What does this all mean? They’ve got something, but they’ve got a ways to go.

We’re still bullish on AI. It would be weird if we weren’t tbh.

AI tools, tips, links

  1. Use ChatGPT to help with your SEO. Link

  2. Official OpenAI guides for prompt engineering. Learn ChatGPT straight from the source. Link

  3. ChatGPT plus Billy Bass to help with job interviews. I can’t believe I said those words in that order. Link

  4. Test your AI-generated content on real consumers. Link

  5. Remember what I was saying about AI bot replies on the internet? Link

  6. Using AI to fill in the rest of memes. Link

Poor Apple.

In case you missed it...

What’d you think?

Got feedback? I’d love to hear it. Hit me up on LinkedIn or Twitter.