
🤖 Why Google refuses to use its own AI for search

PLUS: Using AI to cheat in college

Good morning.

I want you to really think about how you feel about today’s stories.

They’re polarizing. To say the least.

🤖 Top Stories

😬 College athlete encourages using AI, surprising no one

The world’s toughest job isn’t working on an oil rig. Or controlling Elon Musk’s Twitter.

It’s teachers combatting generative AI.

Case in point: colleges. Specifically, LSU. A popular gymnast posted what seems to be an ad for an AI essay writing service.

Big deal, right?

Aside from getting tons of views, the video reignited the ongoing debate over AI in the classroom.

Schools have struggled to come up with an answer to ChatGPT and generative AI. To many, Fall 2022 was a tainted semester, as AI tools ran unchecked.

Now teachers are wiser, but the technology keeps evolving, at a pace that's outrunning the tools built to detect it.

Cheating in college obviously isn’t new.

Before ChatGPT, there was Chegg. Before Chegg, there was…I’m not sure. That was before my time.

Point is, colleges must evolve to meet a new level of “cheating” if they define it as such. Asking ChatGPT for ideas on an argument for an essay isn’t the same as asking it for the entire essay. Who’s going to police that, though?

🤔 Oddly, Google won't let its own AI near search


Google really doesn’t want you to use its ChatGPT competitor for search.

At a recent all-hands meeting, an employee asked what Google was doing to fact-check Bard. Bard, if you'll recall, is its generative text AI.

The response from management?

“I just want to be very clear: Bard is not search.”

Google seems to be distancing itself from the perception that Bard is an integral part of its search strategy. This despite mentioning integrating Bard into search many times. During its own launch event.

Google wants to have its cake and eat it too. It can't launch Bard and tout it as a game-changer for interacting with search if the results aren't, well, accurate. And it can't throw its hands up and say it's experimental if it wants to actually launch the thing.

Posturing it as a tool to provoke creativity is naive at best, and misleading at worst.

Why ask employees to help improve the accuracy of Bard's results if it's not supposed to be a source of knowledge?

Microsoft is closing in.

👀 CNET cleans house during pivot to AI journalism

Buzzfeed wasn’t the only news outlet to lean on AI.

It came out that CNET's articles weren't 100% all-natural Homo sapiens. Unlike Buzzfeed, they weren't upfront about it. Now, 10% of the company has been laid off.

That's 12 people, for some perspective. Another key detail in this whole story: CNET is owned by private equity, so it's in a period of cost-cutting.

The bigger story here is the rise of AI in journalism and reporting.

Struggling news outlets may pivot to AI to help with their reporting. AI journalist assistants sound a lot better than AI writing every article.

Those tools to detect AI writing had better work, and soon.

😎 Cool links

  1. Game assets in Midjourney!

  2. Stable Diffusion can reconstruct images from…thoughts.

  3. AI automation assistant for business.

Thanks for reading! 🤖