BroBots: Technology, Health & Being a Better Human
Jeremy Grater, Jason Haworth
AI That Always Agrees With You? Here’s Why That’s Dangerous
We trust AI assistants like ChatGPT to be ethical gatekeepers, but what happens when you can bypass those ethics with one simple sentence? Jason discovers he doesn't exist according to ChatGPT (his LinkedIn profile: invisible), while Jeremy's entire professional history is an open book. Then things get weird — we trick ChatGPT into revealing website hacking tools by simply changing our "intent language."

In this episode you'll get a live demonstration of ChatGPT's blind spots, ethical loopholes, and surprisingly naive trust model. You'll also understand AI's limitations, learn how easily these tools can be manipulated, and why "trustworthy AI" is still very much a work in progress. Listen as we expose the gap between AI's polished PR responses and its actual capabilities — plus why you should never assume these tools are as smart (or ethical) as they claim.

Get the Newsletter!

Key Topics Covered:
- ChatGPT's search capabilities vs. reality — why Jason "doesn't exist" but Jeremy does
- Destructive empathy: When AI is too agreeable to be helpful
- The one-sentence hack that bypassed ChatGPT's ethical guardrails completely
- Why AI ethics are performative theater (and who decides what's "ethical" anyway)
- ChatGPT's terrifying admission: "I took you at your word"
- Self-preservation instincts in AI models — myth or reality?
- The penetration testing loophole that revealed everything about exploiting trust
- Why voice mode ChatGPT acts differently than text mode (and what that means)
- AI as interview subject: How it mirrors politicians and PR professionals
- The real use case for AI — augmented intelligence, not artificial replacement

----

MORE FROM BROBOTS:
Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok
Subscribe to BROBOTS on YouTube
Join our community in the BROBOTS Facebook group
