12 Comments

Just wow! Many years ago, my life as a C+ photographer shifted dramatically with my first digital camera and the instant feedback it provided. I think this post and the possibility of using CHAD (my ChatGPT buddy) as my Spanish coach may just open a similar door to language learning. Who knows? But, I'm off to try. Thanks to each of you.

Nov 9 · Liked by Jeremy Caplan

What a post. Every bit of it just fascinating. Thank you

Nov 9 · Liked by Jeremy Caplan

Great examples of real-world use right now!

Nov 9 · Liked by Jeremy Caplan

Awesome stuff, thanks!

Nov 10 · edited Nov 10 · Liked by Jeremy Caplan

Very interesting. But I'm very guarded about the information ChatGPT provides. I'm a copywriter and my clients rely on me to present accurate information. So, I verify and cross-check what these AIs generate, and they often vary between misguided info and complete rubbish. They handle basic common-knowledge topics well, but become increasingly unreliable with topics of greater depth or nuance.

Misguided info is often incorrect information the AI found elsewhere (especially listicles—boy, these AIs love sourcing from listicles), thus repeating incorrect information written by a human. That's not only common, but because the AI tends to rewrite the information without the benefit of context, it can even obscure incorrect facts or make them hard to correlate (which is why I prefer Perplexity, as it gives me citations to check). Basically, genAIs easily create falsehoods containing kernels of truth.

Rubbish info is plain fiction, and I've frequently experienced this with ChatGPT, Bard, and smaller AIs (like Pi.ai - great for a chat, but often enormously factually incorrect). Recently, a colleague told me how he used Bard to generate a list of his published academic papers, and the AI produced a very impressive and entirely fictional list. He was astounded, and he's an AI fundie!

My point is to tread carefully when using these AIs to 'tutor' you on a subject. They are very, very error-prone, to the point that I can sometimes spot when a client gives me AI-generated copy. These AIs are great for finding sources (if they provide citations), but they don't critically vet their information, and the more layered and nuanced a topic, the more likely they are to get facts wrong or mix true points with fictional narratives.

Nov 10 · Liked by Jeremy Caplan

Fantastic, thanks so much!


Great share, thanks for posting 😊


An excellent example of quality journalism: informative, unbiased, and well-structured. Thank you for the article. I'd also recommend using a task tracker to plan your articles: https://bordio.com/


Thanks, Frank! Those are some creative uses! I've been using meta prompts on ChatGPT 3.5 to get teams of virtual experts to brainstorm ideas. I like HIVE and Quicksilver (available on OpenAI's Discord) as two very powerful meta prompts. They're like having five "Act as" experts at once.

One thing I love to do is to tell the experts to "discuss" among themselves and pick a winner. So, I might ask each expert to suggest 3 ideas. Rather than having to sift through 15 ideas myself, I tell the experts (or the AI intermediary) to rank the ideas, with 1 being best, then to give me the top three ranked ideas.

To save tokens, I often tell them NOT to output the fifteen ideas or their discussion. Just give me the results.
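
For anyone who wants to try this, here is a minimal sketch of that "panel of experts" pattern using the OpenAI Python SDK. The expert roles, the topic, and the prompt wording are illustrative assumptions on my part, not the actual HIVE or Quicksilver meta prompts mentioned above.

```python
# Rough sketch of the "panel of experts" prompt pattern described in the comment above.
# Assumes the OpenAI Python SDK (v1+) is installed and OPENAI_API_KEY is set in the environment.
# The roles, topic, and wording below are placeholders, not the HIVE/Quicksilver prompts.
from openai import OpenAI

client = OpenAI()

meta_prompt = """Act as a panel of five experts: a marketer, an engineer,
a teacher, a designer, and an economist.
Each expert proposes 3 ideas for the topic below, then the panel discusses
and ranks all 15 ideas, with 1 being best.
Do NOT output the individual ideas or the discussion (this keeps the
response short and saves tokens).
Output ONLY the top 3 ranked ideas, each with a one-sentence rationale.

Topic: newsletter formats for explaining AI tools to beginners."""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": meta_prompt}],
)

# Print only the panel's final ranked shortlist.
print(response.choices[0].message.content)
```

The key design choice is the same one described above: the intermediate brainstorming and discussion are suppressed in the output, so you only pay for (and read) the final ranked shortlist.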
