TCC Podcast #423: Copy, Originality and A.I. with Jon Gillham

For the 423rd episode of The Copywriter Club Podcast, we’re checking in on the progress A.I. has made over the past year with Jon Gillham, founder of Originality.AI. We talked about how Originality helps protect writers from false accusations of plagiarism and checks facts (unlike ChatGPT and Gemini), plus some of the risks that A.I. poses to the world of content creation. Click the play button below, or scroll down for a full transcript.

Stuff to check out:

Get the AI Bullet Writing Prompt
Originality.AI
The Copywriter Club Facebook Group
The Copywriter Underground

 

Full Transcript:

Rob Marsh: Almost two years ago, we realized that A.I. was not just a new idea that copywriters and content writers needed to pay attention to; rather, it was a game-changing technology that would impact almost everything writers do. The number of new tools and features that use A.I. to deliver their benefits is in the thousands. That’s a big part of why we launched the A.I. for Creative Entrepreneurs Podcast last year. You can find more than 20 conversations about A.I. on that podcast.

But as A.I. has become almost commonplace, we stepped away from doing so many interviews about artificial intelligence and just how it is changing our industry. But I’m thinking it’s about time we checked in on how the tech has changed over the past few months and what copywriters should be using it for… if they aren’t already doing it.

Hi, I’m Rob Marsh, and on today’s episode of The Copywriter Club Podcast, my guest is Jon Gillham, the founder of Originality.AI. This tool is the most accurate A.I. detector available today. What’s more, in addition to checking for content created by A.I., it’s a fact checker (something tools like Gemini and ChatGPT have struggled with), it checks for plagiarism, and it will help protect you against clients and others who might claim your writing isn’t original. We talked about how they do it and the risks A.I. continues to pose for writers on this episode, so stay tuned.

Before we get to that… last summer we ran the last ever live cohort of The Copywriter Accelerator program. Since then, the only way to get the business building insights and strategies that we shared with more than 350 copywriters over the past seven years was to join the Fast Track version of the accelerator at thecopywriterclub.com/fasttrack. But I’ve been working on an updated version of that program and it too will go away soon. So if you’ve been thinking of joining the accelerator, time is running out. What’s coming next? It’s too soon to reveal what I’ve been working on, but if you join the accelerator fast track before we launch it, you’ll get early access to the new program, absolutely free. Until then, you get all of the content, the 8 modules and blueprints and several bonuses that are included in The Accelerator Fast Track. And when we launch the new program sometime next year, you’ll get that updated program too. Don’t wait to work on your business so when the new year is here you have a steady flow of clients and a signature service you’re proud to offer them. Visit thecopywriterclub.com/fasttrack to learn more today.

And now, let’s go to our interview with Jon Gillham.

Hey, Jon, welcome to The Copywriter Club Podcast. We’d like to start with your story. So how did you become the founder of Originality.AI, and I guess also the co-founder of AdBank and Motion Invest and Content Refined? You’ve done a lot of this company starting thing.

Jon Gillham: Yeah, it’s been a journey. So my background was as a mechanical engineer, did that in school, and then I always knew that I wanted to get back to my hometown and start some sort of online projects. A lot of those projects had a central theme around creating content that would rank in Google, get traffic, and monetize that, whether that was an e-commerce site or a software business. And then at one point, we built up some extra capacity within the team of writers I was working with, and started selling that extra capacity. So we built up a content marketing agency, sold it, and then had seen the wave of generative AI coming. We looked to build a solution to try and help provide transparency between writers and agencies and clients. And that’s where Originality came from.

Rob Marsh: So as far as most people’s experience with AI, it really started about two years ago when, you know, ChatGPT went live and suddenly everybody was like, oh my gosh, this is not what we were expecting, or it’s come along a lot faster. But you’ve been doing this a lot longer than that. Tell us, you know, basically, how did you get interested in AI and get started with creating these kinds of tools?

Jon Gillham: Yeah, so I totally agree. I think a lot of people sort of assume everything on the internet that predated ChatGPT was human generated. But the reality is that there were other tools that predated ChatGPT. Specifically, there was GPT-3, which OpenAI released in 2020, and GPT-2 before that. From those, many writing tools were built off the back of them, tools like Jasper.ai. And we were, at one point, one of the heaviest users of Jasper, where we had a writing service where we transparently used AI content, but sold that content for a lot less than the human-generated content in another part of the content agency. And so that was where we really started to see the efficiency lift that came from using AI, and then, you know, who gets to capture that efficiency? Is it the writer that copies and pastes out of ChatGPT, who then displaces a writer that did hard work on their own? That was where we first started playing with AI, and then, yeah, using it extensively within our content marketing agency.

Rob Marsh: So before we go really deep on AI and the stuff that you’ve done, I’m interested, as a founder, as a co-founder, just what are some of the biggest challenges that you have faced as you’ve started your businesses? Again, we’re talking to an audience of people who are running their own businesses, most of them. So I’m just curious how you’ve been able to succeed where so many others tend to fail.

Jon Gillham: I mean, there are certainly failures in there, so they’re not all successes. I think the common theme when there has been success is probably two core things. One, that we’re solving a problem that is meaningful and adding significant value by helping to solve whatever that problem is. And the second piece is when there’s been a really good team around that project, when the co-founders on it are great, when the initial hires are really, really good. Those are probably the two key things that have seemingly been the common traits when the projects have gone well, and there are certainly projects that haven’t gone well, lots of failures in there as well.

Rob Marsh: Interesting you say that. I worked at a startup a decade or two ago with a CEO who came in to run it. It was a fun environment, really great place to work. We had a successful exit, sold off to HP. And I remember the CEO saying, if you’re lucky, you get to have an experience like this sometime in your career where you put together a great team, you’ve got a great product, you have this great experience. And then he said, and then you spend the rest of your life trying to replicate that at the next company or the company after. And there’s a lot of truth to that.

Jon Gillham: There’s a lot of truth. In a lot of our weekly all-hands meetings right now, we’re saying, you know, these are currently the good old days. So enjoy them, because hopefully we’ll be fortunate enough to be looking back at these days as the good old days, because it is a lot of fun right now. And I certainly echo what he was saying: a lot of things need to go right for all the pieces to line up in that scaling stage of a company.

Rob Marsh: Okay, so let’s talk about Originality AI and this tool that you have built. Basically, my understanding of it is, you know, as I’ve scanned through and checked it out, it does a few different things. You know, checking to see if there’s plagiarism, if some content was written by AI, some additional things as well. To me, this seems incredibly useful for a couple of different audiences. One, I teach a college class at one of the colleges here. I’m always using AI checkers. As I see submissions coming in from students, I’m like, that’s suspicious. Let’s run it through the checker. But obviously businesses hiring content writers, copywriters want to see that their stuff’s original. Problem is sometimes the checkers don’t work the way they’re supposed to. So tell us about originality AI and the problems that you’ve been solving with it.

Jon Gillham: Yeah, so the problem we started out to solve, coming from the world we were in within content marketing, is the final step in the content quality check. So kind of a final QA/QC on a piece of content. Historically that might mean a readability check and a plagiarism check, and then, okay, we’re good to publish it. Now that means checking whether it’s been generated by AI or not, and we’ll get into some of the challenges around that. Checking whether it’s plagiarized, although almost no one plagiarizes anymore when you can just get AI to write it for you. And then fact checking. We have a fact checker built in because there’s a new, heightened sensitivity around fact checking with the prevalence of generative AI content and hallucinations. And then some of the standard readability, grammar, and spelling checks. So we aim to be that complete content quality QA/QC step so that somebody can be really confident. We say hit publish with integrity, where they can take a piece of content, make sure it meets all the requirements, and then hit publish.

Some of the challenges we talked about: AI detectors are highly accurate, but not perfect. The same way a weather forecast gets it right a lot of the time but also gets it wrong to some extent, AI detectors are similar. They’re a classifier that aims to predict whether it thinks a piece of content is AI generated or human generated, and then it makes its best prediction. In our case, it gets it right and calls it AI, if it’s just a straight ChatGPT output, 99% of the time, but it will call human content AI one to three percent of the time. That works in certain settings and doesn’t work in others, academia being one where it’s really impossible to apply an academic disciplinary action with a false positive rate above 0%.
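To put those numbers in perspective, here is a minimal sketch, assuming the 99% detection rate and the one-to-three-percent false positive rate quoted above, of how those rates play out when content is screened at volume. The function and figures are illustrative only; this is not Originality.AI’s code.

```python
# Illustrative only: the rates come from the interview, not from Originality.AI's model.
def expected_outcomes(num_ai_docs: int, num_human_docs: int,
                      true_positive_rate: float = 0.99,
                      false_positive_rate: float = 0.02):
    """Expected detector calls for a batch of documents.

    true_positive_rate:  share of AI-written docs correctly flagged (~99% per the interview).
    false_positive_rate: share of human-written docs wrongly flagged (1-3% per the interview).
    """
    flagged_ai = num_ai_docs * true_positive_rate
    false_alarms = num_human_docs * false_positive_rate
    return flagged_ai, false_alarms


if __name__ == "__main__":
    # Screening 1,000 genuinely human articles at a 2% false positive rate still
    # produces ~20 false flags, which is why a flag alone can't justify discipline.
    _, false_alarms = expected_outcomes(num_ai_docs=0, num_human_docs=1000)
    print(f"Expected false accusations out of 1,000 human articles: {false_alarms:.0f}")
```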

Rob Marsh: Yeah. It strikes me too that, because of the way AI was trained on human writing (at least originally; now I think there’s more AI training data in the actual dataset), there have got to be one to three percent of humans who write the way AI writes anyway. They’re boring writers, or they have the cadence that we tend to see get picked up, or they use those same cliches that we tend to see a lot of. It makes a lot of sense to me that those writers are going to come up as AI because, well, the AI has been trained to look for this stuff.

Jon Gillham: Yeah. I mean, by definition, all this data has gone into this massive training set, and then it ends up producing… you can ask it to produce a whole range of outcomes, like, hey, write a Nobel Peace Prize acceptance speech in the style of Dave Chappelle, right? That’s going to be a pretty unique piece of content that doesn’t look like typical AI content. But there are certainly some ticks to AI content that we feel like we can pick up on. And yeah, there are definitely some people that have a style that is very similar to the base style of most LLMs. And it can be extra frustrating for them because they end up getting false positives at a higher rate than somebody else might.

Rob Marsh: Yeah. So are you saying that if I have AI write something and I try to spice it up by saying, you know, write like Dave Chappelle, or make this humorous or silly or something like that, Originality can still pick that up at 99 to 100%? It can still tell that it’s written by an AI?

Jon Gillham: Yeah, so that’s the big difference between a human’s ability to detect AI and an AI’s ability to detect AI. With a human’s ability, we can get fooled very easily. We have a couple of cognitive biases working against us: an overconfidence bias and a pattern recognition bias. If you ask a room who’s an above-average driver, 80% of the room puts up their hand. And, you know, the stock market and casinos are built off of this human tendency to think we see patterns when we actually don’t. So in all the studies, a human’s ability to detect AI is like 50 to 60% accurate. And it gets worse when you apply these prompts that make the content more unique than the straight, recognizable ChatGPT kind of content. Whereas AI detectors are picking up a lot more signals than what humans are capable of identifying, and their accuracy stays very high, at 99%, even for the prompts that are most challenging for a human to identify.

Rob Marsh: So how do you solve that problem? What does your tech do that’s not being used by everybody else?

Jon Gillham: Yeah, so in all benchmarks we’re the most accurate, but there are other detectors that are close. The sort of unsettling thing is that whatever AI system exists in the world faces some of the same challenges, where if you ask ChatGPT, or the makers of ChatGPT, why it responded like that, they struggle to answer. They can talk about the training data, they can talk about the training method, but they can’t say why it responded like that. And in a similar way, our detector is picking up on patterns that we don’t know. We understand how we trained it. We understand the efficacy tests that we put it through. We understand the benchmark tests that we put it through. But we can’t say this piece of content was identified as AI for these reasons. I wish we could, but that’s just not how AI works.

Rob Marsh: Yeah, so this is part of the black box trouble that leads some of us to think that maybe AI is doing stuff in the background that we’re not even aware of and will someday take over the world.

Jon Gillham: Exactly. It is an unsettling experience to create something and not understand exactly how it works.

Rob Marsh: Are there other challenges around AI-generated content and identifying it that we haven’t chatted through or hit on?

Jon Gillham: I think another challenge related to AI content is that a lot of editors used to use the quality of the content as a tell on whether or not they needed to go deeper on fact checking. Usually, factually accurate information was also well-written information. The challenge that generative AI has produced is that the trigger of “this does not feel like a very well-researched article” no longer works the same way. Now, really well-written, grammatically correct AI-generated content can also be very factually wrong through hallucinations, having just made stuff up, but convincingly so. So the level of intensity that needs to be applied to fact-checking all content has gone up, because in today’s environment with generative AI it has become harder to understand where generative AI has poisoned the content.

Rob Marsh: Yeah. So some examples of that might be, well, you could be writing, say, a paper for school where you’re saying, hey, give me 10 sources for this particular idea or scientific study or something like that. Or if you’re writing content for a client, you might be looking for, you know, five real-life examples of this particular marketing thing that happened, and then the LLM will just hallucinate two of the five. It’ll just make them up. They sound real, but they’re not. So how do you guys fix that? Because it seems like you’re using an LLM that’s making stuff up. How do you make sure that it can tell that it’s making stuff up?

Jon Gillham: Yeah, so there are very few settings, very few times, where an AI or an LLM can achieve the level of perfection that is needed in a lot of environments, where you need a 99.99% accuracy rate. And fact-checking is no different. But what LLMs are great at is going out and assisting humans in that process. So we created a fact-checking aid that looks at a piece of content, identifies all the facts in that piece of content, then goes out to the web and trusted sources, pulls in a bunch of information, makes a judgment on whether that statement is potentially true or potentially false, and then provides a bunch of sources that a human editor can go to and investigate further. So it acts as a fact-check aid that provides its judgment, but its judgment will sometimes be wrong, because it does get things wrong, and that’s part of the problem. Still, it produces a lot of efficiency for an editor that is already going to do that process, where they need to take a piece of content, identify a fact, go out, find a source, and try to understand what the truth is within the context of that article. And it can produce what feel like some pretty magical answers at times. An article might say the boiling temperature of water is 90 degrees Celsius, when everyone’s like, no, of course it’s 100 degrees Celsius, but it will call it true if the context of that article is mountain climbing at a certain elevation. So it’s like, given the context of this article, this fact that water boils at 90 degrees Celsius at this elevation is true. And it can feel like a magical response, where it understood the context of the entire article, the elevation that was mentioned above, or even the camp that was mentioned, and then it references the elevation and provides the right answer. So it can feel like a pretty cool aid in the fact-checking process, but it does get things wrong at times.
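To make the workflow Jon describes a little more concrete, here is a minimal sketch of that extract-retrieve-judge loop. The actual Originality.AI implementation isn’t public, so the function names are hypothetical, and the claim extractor and judge below are naive placeholders where a real system would call an LLM and a web search.

```python
# A minimal, illustrative sketch of the workflow described in the interview: extract
# factual claims, gather sources, judge each claim in the context of the article, and
# hand everything to a human editor. NOT Originality.AI's implementation.
import re
from dataclasses import dataclass, field


@dataclass
class FactCheckResult:
    claim: str
    verdict: str                       # a judgment for the editor, never a guarantee
    sources: list[str] = field(default_factory=list)


def extract_claims(text: str) -> list[str]:
    # Placeholder heuristic: treat sentences containing a number as factual claims.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if re.search(r"\d", s)]


def search_trusted_sources(claim: str) -> list[str]:
    # Placeholder: a real pipeline would query the web / trusted databases here.
    return []


def judge_claim(claim: str, sources: list[str], context: str) -> str:
    # Placeholder: a real pipeline would ask an LLM to weigh the claim against the
    # sources *and* the article context (e.g. boiling point at altitude).
    return "check manually"


def fact_check_article(text: str) -> list[FactCheckResult]:
    results = []
    for claim in extract_claims(text):
        sources = search_trusted_sources(claim)
        verdict = judge_claim(claim, sources, context=text)
        results.append(FactCheckResult(claim, verdict, sources))
    return results  # the human editor reviews each verdict and its sources


if __name__ == "__main__":
    article = "At base camp, around 3,500 m, water boils at roughly 90 degrees Celsius."
    for result in fact_check_article(article):
        print(result.verdict, "->", result.claim)
```

In the boiling-water example from the conversation, the important part is that the judging step sees the whole article as context, not just the isolated sentence.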

Rob Marsh: Sure. So, yeah, it would identify maybe outlying situations that we wouldn’t necessarily be thinking of off the top of our heads that are true, and it could pull some of that stuff in. So let me give you, this is probably a ridiculous example, and I’m obviously asking you to maybe predict how it would figure this out, but I’m assuming you’ve heard the very famous quote that’s all over the internet: “You can’t trust everything you read on the internet,” by Abraham Lincoln. So if you were to try to source that, obviously there are literally thousands of pages that have that on the web. Would the AI pick that up as false? Or, because it can identify all of these sources out there, do you think it would not be able to identify that? Which again, it’s ridiculous, because as humans we all know that it’s a ridiculous quote, but I’m curious about that.

Jon Gillham: I’m guessing at how it would answer, but I think it would struggle with that, because it depends on the context of that statement. The statement that you just made, if you worded it as “a common statement is…,” then what you just said was factually true: that is a common statement that is shared all over the internet, attributed to Abraham Lincoln. So on the true-or-false binary classification, I think it would struggle, because in certain settings that is a true statement. What you said was true: that is a common statement that is shared on the internet. But where it would really shine is in the description of why it made that judgment. I think it would do a really good job there, because there is such a rich history behind that quote that there would be a really good explanation, something like, this is used as an example of how you can’t trust the internet, and it depends on how it would be used. They would word it better than I could. So I think it would provide a pretty useful explanation, a very accurate and helpful answer, but I don’t know whether it would call it true or false, because there are cases where that statement could be made and it would be a true statement depending on what came before it.

Rob Marsh: That makes sense. I think this is maybe one of the areas where AI, or LLMs, still really struggle, and that is context shifting, where things are one way in 80% of contexts, but in 20% of contexts it’s different. As humans, we’re really good at reading the context and changing the meanings, and the machines just aren’t quite there yet.

Jon Gillham: Yeah, agreed.

Rob Marsh: OK, so that’s fact checking. And then it also checks for readability. These are tools we’re pretty familiar with because Grammarly has been around for a decade, tools like Hemingway, that kind of thing. Are you doing anything different, or is it sort of similar to what those tools are doing?

Jon Gillham: Sort of similar. One thing that’s different is that we try to apply a level of science to content that sometimes gets applied and sometimes doesn’t. In the case of readability, if you were to search for what the optimal readability score is for writing on the internet, the answer depends, and again, your audience comes first. But when we looked at it, there’s this really clear distribution, using a few specific scores, around top-ranking articles in Google. And it did not coincide with the prevailing wisdom of “write at an eighth-grade level, period.” What we’ve been able to see is that certain scoring mechanisms, the Flesch-Kincaid reading ease, match up to a really nice normal distribution around certain score ranges that exist in the top 20 results within Google. So if you’re trying to create content that will rank on the internet, you should aim to create content that has a readability score within this range, because that’s what the rest of the top-ranking articles do. Now, obviously there are outliers: if you’re writing intense medical content, sure; if you’re writing for children, sure. But what we’re doing that’s different is, instead of just providing a non-data-backed recommendation on a reading score, we have built our tool specifically for people that are publishing content on the web, identified the best tests to use for the readability score, and then the best scoring range to be in, where we break it down by distribution. So, is it one standard deviation or two standard deviations away from the average?

Rob Marsh: That’s really cool. And so does it do that by topic, or do you have to tell it the audience? Like, how does it identify that?

Jon Gillham: Yeah. So it’s general; it’s across all the topics that we looked at. We provide the graph and that range, and then you can pick, based on your audience, whether you should be on the upper end of that range or the lower end of that range. It’s unlikely you should be way off that range on the readability score, unless you have a really strong reason to, if the primary objective of that piece of content is to rank on Google and get traffic. We provide this range from six to nine, and based on your audience, you can adjust where within that range you think you should be.
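For readers who want to see what this kind of scoring looks like in practice, here is a rough sketch using the standard Flesch-Kincaid grade-level formula. The syllable counter is a crude heuristic, and the 6-to-9 target band is simply the range quoted in the conversation, not a published Originality.AI parameter.

```python
# Standard Flesch-Kincaid grade-level formula with a rough syllable heuristic.
# The 6-9 band below echoes the range mentioned in the interview; adjust per audience.
import re


def count_syllables(word: str) -> int:
    # Crude approximation: count vowel groups; good enough for an illustration.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def flesch_kincaid_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59


def in_target_band(text: str, low: float = 6.0, high: float = 9.0) -> bool:
    return low <= flesch_kincaid_grade(text) <= high


if __name__ == "__main__":
    sample = ("Water boils at a lower temperature at altitude. "
              "Climbers plan their cooking around that simple fact.")
    print(round(flesch_kincaid_grade(sample), 1), in_target_band(sample))
```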

Rob Marsh: Okay, yeah, like I said, that feels really incredibly useful actually, especially for a writer who is writing across different niches or industries, you know, maybe addressing different audiences. Does the tool also then make suggestions? Like here’s how you can dumb it down or smart it up as part of that?

Jon Gillham: So it will identify, sentence by sentence, which parts make it challenging to read. If there are parts of the writing that are at a very high level, it will identify those parts, and it can provide guidance on dumbing them down, making them easier to read, cleaning them up. It provides that guidance on a sentence-by-sentence level. It doesn’t provide guidance in the other direction.

Rob Marsh: Okay. Yeah. And so it’s not actually rewriting, which seems like it would defeat the purpose of having this be an AI checker in the first place.

Jon Gillham: We’re wrestling with that topic, because it’s the same thing with grammar and spelling, where we have some users that would love a “fix all issues” button, but then it will trigger the AI detection. So we’re wrestling with that, because maybe there’s a use case there, but we’ve got to really figure out how we don’t confuse users. Because, yeah, clicking a button inside of an AI detection tool that says “fix all issues” and then having the content detect as AI would potentially be a confusing user experience.

Rob Marsh: Yeah, that seems to be one of the triggers for a human writer is that there are actually some errors in it. I mean, that’s certainly something I see with my students in the class that I teach. And maybe this is where those 50% human misidentifications start happening. But if I see a couple of grammatical errors, I’m like, oh, OK, yeah, this is clearly human written instead of AI.

Jon Gillham: Unless they added that to the prompt.

Rob Marsh: Yeah. Exactly. Please add three misspellings so that Rob Marsh doesn’t figure this out. So what else does the tool do? What’s the next evolution going to be?

Jon Gillham: We want to help publishers publish content and be as successful as possible by publishing that content, so trying to help them understand whether the content will perform well within Google. We have an interesting take on content optimization in the works, which we’re really excited about. The current method of content optimization tools, I don’t know if you’re familiar with them, like Surfer SEO or MarketMuse or Clearscope, is to look at the top 20 results and then do what I’ll call dumb math and just say, these are the keywords that you should include. I think there’s a smarter way to do that, and we’re testing it, and we’re excited for what’s going to come with that. And then any job that a copy editor does. We try to be the tool that helps copy editors do their job far more efficiently and effectively. One of those jobs is making sure that a piece of content meets the editorial guidelines of a company. Whether that’s always spelling a word a certain way that might not be the standard spelling, every paragraph being no more than three sentences, active voice versus passive voice, whatever those editorial guidelines for a company might be, we’re trying to provide this editorial guideline compliance component. So an editor can put in a piece of content, click a button in our tool, and then understand exactly how that piece of content matches up against each of the things they need to check: AI, plagiarism, fact checking, grammar, spelling, readability, editorial compliance with their company’s guidelines. And then ultimately, is it going to perform well in Google, since that’s a lot of what our users are using it for. So that’s what’s coming.
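As a sense of how an “editorial guideline compliance” check like the one Jon sketches could work, here is a toy example that encodes a couple of house rules as data and flags violations. The rules, names, and thresholds are invented for illustration and are not Originality.AI’s feature set.

```python
# Toy guideline-compliance check: preferred spellings and a paragraph-length rule.
import re

HOUSE_RULES = {
    "preferred_spellings": {"e-mail": "email", "web site": "website"},
    "max_sentences_per_paragraph": 3,
}


def check_guidelines(text: str, rules=HOUSE_RULES) -> list[str]:
    issues = []
    # Rule 1: enforce house spellings.
    for bad, good in rules["preferred_spellings"].items():
        if re.search(rf"\b{re.escape(bad)}\b", text, flags=re.IGNORECASE):
            issues.append(f"Use '{good}' instead of '{bad}'.")
    # Rule 2: cap the number of sentences per paragraph.
    for i, paragraph in enumerate(text.split("\n\n"), start=1):
        sentence_count = len([s for s in re.split(r"[.!?]+", paragraph) if s.strip()])
        if sentence_count > rules["max_sentences_per_paragraph"]:
            issues.append(f"Paragraph {i} has {sentence_count} sentences "
                          f"(max {rules['max_sentences_per_paragraph']}).")
    return issues


if __name__ == "__main__":
    draft = "Send us an e-mail. We reply fast. We like it. We really do."
    print(check_guidelines(draft))
```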

Rob Marsh: So I can see a copy editor might want to use that to do basically 90% of their job, and then they can take the output and do a quick read-through. They could save themselves a lot of time. I suppose a writer could do that as well to reduce the need for as much of a copy editor, or a client may be interested in doing that on the client side just to double-check everything.

Jon Gillham: Yeah, we see that a lot. We’re building it for the copy editor, but we’re seeing that whole value chain, from the writer using it up front to make sure the piece meets those requirements, because they know what they’re being judged against, to the end client using it as well to say, am I ultimately getting content that meets my expectations? AI has caused a lot of problems in the world of writers. One of the biggest problems has been the lack of trust that has bubbled up around what writers have done, what they haven’t done, and what the expectations on writers are. So we’re trying to be a tool that provides transparency from the client, through whoever’s in the middle, the editor, the agency, et cetera, to the writer that’s going to get paid fairly for their work. So yeah, generative AI has definitely created a lot of challenges, with writers facing more of those challenges than probably any other industry. And hopefully we’re the good Terminator AI as opposed to the bad Terminator in this battle.

Rob Marsh: So, you’re kind of hinting at it, but one of the challenges that a lot of writers have had is they write something, they submit it to the client, the client runs it through an AI checker, and it gets a false positive. The writer, you know, is like, hey, I wrote this whole thing. So the trust is gone there. In order to fix that, is this something you would recommend copywriters and content writers have themselves, or would I recommend it to my clients: you guys ought to get Originality.AI and run it through that, because that will show you that it’s my copy. What’s the dynamic there?

Jon Gillham: First, false positives happen. We know that, especially at the volume that we’re running content through, and we understand how much it sucks when a writer gets falsely accused. It’s really tricky right now. I’ll share a couple of quick asides. We had a writer writing for Originality; we obviously use our own tool. And they swore up and down that they had not used AI. Now, we have a free Chrome extension that lets people visualize the creation of a document.

Rob Marsh: And so it can follow the change tracking in a Google document.

Jon Gillham: Yes. So behind that change tracking, there’s a ton of data, character-by-character metadata inside of a Google document. What our free Chrome extension does is pull that out and then recreate the writing process. And if you see something like a cut-and-paste of 1,000 words in a minute, one 15-minute writing session for a 1,000-word article, and it hits 100% probability for being AI detected, I’m pretty confident that was AI. So in our case, we had a writer who swore that they hadn’t used AI; we went into the Chrome extension, and they ultimately admitted that they had used AI. In some cases we coach them up on it and maybe still work with them, and in some cases we don’t. So what do I recommend writers do? Create the document in a Google document, use a free Chrome extension like ours that will show the creation process, and then use a tool like Originality to know if they’re going to have a challenge. If it is going to be a false positive, they can show the client that they truly created that content themselves, and they can get fairly paid for it. The world I fear for writers is a world where there is zero protection against other people using AI. There are a lot of really world-class writers whose equivalent AI can’t write right now. But AI can write a lot better than I can, and it can write a lot better than some writers that I’ve hired in the past. Those individuals are extremely at risk of their jobs being replaced. And based on the progress of AI, I think most writers are going to be at risk of their jobs being replaced by AI if there isn’t any kind of effective defense for saying what is human and what is AI.
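Jon doesn’t describe the extension’s internals, but the kind of heuristics he hints at, large single pastes and implausibly fast writing sessions, can be sketched over a generic revision-event log. The event format and thresholds below are assumptions made for illustration, not the real Chrome extension’s data model.

```python
# Hedged sketch of writing-process heuristics over a generic revision log.
from dataclasses import dataclass


@dataclass
class RevisionEvent:
    timestamp: float      # seconds since the document was opened
    chars_inserted: int   # characters added in this edit


def writing_process_flags(events: list[RevisionEvent],
                          paste_threshold_chars: int = 2000,
                          max_plausible_wpm: float = 120.0) -> list[str]:
    flags = []
    if not events:
        return ["No revision history available."]
    # Flag any single edit large enough to look like a wholesale paste.
    for e in events:
        if e.chars_inserted >= paste_threshold_chars:
            flags.append(f"Single paste of ~{e.chars_inserted} characters at t={e.timestamp:.0f}s.")
    # Flag an overall writing speed that no human sustains.
    total_chars = sum(e.chars_inserted for e in events)
    minutes = max((events[-1].timestamp - events[0].timestamp) / 60.0, 1e-6)
    words_per_minute = (total_chars / 5.0) / minutes  # ~5 characters per word
    if words_per_minute > max_plausible_wpm:
        flags.append(f"~{words_per_minute:.0f} words/minute over the whole session.")
    return flags


if __name__ == "__main__":
    # A 1,000-word article appearing as one paste a few minutes after opening the doc.
    events = [RevisionEvent(0, 40), RevisionEvent(300, 5000)]
    print(writing_process_flags(events))
```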

Rob Marsh: Yeah, that makes a lot of sense. Okay, so maybe leaving the world of writing and AI, I don’t know if you’ve got thoughts on this, but where do you see AI going more broadly? For writers, obviously, there’s a bit of a threat to our livelihoods, especially if we’re writing at the bottom half of that writing scale, if we don’t have an original voice of our own that’s really difficult to copy, or if we’re not able to write for our clients in their voices. Obviously there’s risk there, but what about beyond writing? Do you see AI as a threat to the human race? Where are we at?

Jon Gillham: I would have answered differently probably two years ago or a year and a half ago. When we first launched, we thought GPT-4 would come out, we would no longer be able to detect content, and that would be it. We’d just enjoy the last few years of humanity before AI takes over and we all become paperclips. But what we have seen out of LLMs is that there has been a plateauing around intelligence. If we look at the leap from GPT-3 to 4 to kind of now, and this could age really poorly, what we’re seeing is a plateauing around the capability of the tools. And we’re seeing the gap close: our detection is better now than it has ever been, despite there being far more advanced models. And we’re seeing all of these open-source models closing in on the closed-source models.

What’s happening now is that additional features are getting added. So it’s like the brain is already there. The analogy that I like to use right now is that a spreadsheet is a pretty simple piece of technology, but the world would shut down if no one was allowed to use a spreadsheet for a day, because it is so pervasive in so many pieces of business operations. I think it’s going to be a similar-ish trend, where there are going to be a lot of people that do get displaced. Developers, writers, graphic artists are all at risk. But I think it’s hopefully going to be a force for expansion of GDP and the creation of additional jobs, where companies that used to need 20 people now need five people, and therefore there are more companies. So I think I’m optimistic, but I do think there will be disruption along the way.

Rob Marsh: I mean, disruption is not new. It happens every few decades, certainly every century or two. So this may just be the next big disruption. But until that really gets underway, tools like this are really helpful in protecting the things that we do as writers. So Jon, if people want to check out, well, first of all, the Chrome extension, is it also called Originality? Or is there a different name for it?

Jon Gillham: Yeah, so if you search originality.ai Chrome extension, it’s available.

Rob Marsh: Okay. And then obviously, originality.ai, where else can people go to connect with you or to find out more about, you know, how you think about this whole problem?

Jon Gillham: Yeah, happy to connect with anyone, anyone that’s facing challenges around false positives. We’re always eager to help guide people through that challenge. You can connect with me at jon, that’s J-O-N, at originality.ai, or find me on LinkedIn.

Rob Marsh: Awesome. I appreciate your time and just talking through all of the stuff that is going on here because yeah, it is a challenge and there’s so many cool tools that can make this easier and better. So thank you for that.

Jon Gillham: Thanks Rob. Thanks for having me.

Rob Marsh: Thanks to Jon for helping me understand a bit more about the latest changes that we’re seeing in the world of artificial intelligence. You should definitely check out Originality at originality.ai. Obviously, AI has presented a challenge for writers over the past couple of years. We’ve seen a lot of clients shift their content plans to using more AI tools instead of content writers, and that has not always resulted in better content or copy. Many of them have changed back since then. There are, however, copywriters who are doing some pretty amazing things with AI.

So what’s the difference? Well, they’re putting in the time to learn and use the tools. Originality, like I said, is definitely worth checking out, but it’s not the only tool you should be trying. You should be trying tools like Claude and ChatGPT and Le Chat, and writing tools like Wordtune. You should be using the AI features that are in tools like Notion and Hemingway and even Google Docs. This stuff is important. And if you want to be a copywriter or a content writer for more than the next year or two, you really do need to know how to use these tools. If you haven’t gotten started already, you can get my AI bullet writing prompt completely free at thecopywriterclub.com/aiwriter.

It’s a pretty in-depth prompt that will help you write pretty amazing bullets, headlines, and subheads for your emails, for your subject lines, for your sales pages, however you want to use it. You can get that again at thecopywriterclub.com/AIwriter

 
