
For the 423rd episode of The Copywriter Club Podcast, we're checking in on the progress A.I. has made over the past year with Jon Gillham, founder of Originality.AI. We talked about how Originality helps protect writers from false accusations of plagia...
Jon Gillham
It's time to turn your daydream into your dream job. Wix gives you the power to turn your passion into a moneymaker with a website that fits your unique vision and.
Rob Marsh
Drives you towards your goals. Let your ideas flow with AI tools.
Jon Gillham
That guide you, but give you full control and flexibility. Manage your business from one dashboard and keep it growing with built in marketing features. Get everything you need to turn your part time passion into a full time business. Go to wix.com.
Rob Marsh
Almost two years ago we realized that AI was not just a new idea that copywriters and content writers needed to pay attention to. Rather, it was a game-changing technology that would impact almost everything writers do. The number of new tools and features that include and use AI to deliver their benefits is now in the thousands, maybe even the tens of thousands. That's a big part of why we launched the AI for Creative Entrepreneurs podcast last year, and you can find more than 20 different conversations about AI on that podcast. It's easily available at our website as well. But as AI has become almost commonplace, we stepped away from doing so many interviews about artificial intelligence and how it's changing our industry. And now I'm thinking it's about time we checked in to see how the tech has changed over the past few months, and what copywriters should be using it for if they're not using it already. Hi, I'm Rob Marsh, and on today's episode of The Copywriter Club Podcast, my guest is Jon Gillham, the founder of Originality.AI. This tool is the most accurate AI detector available today, and what's more, in addition to checking for content created by AI, it's a fact checker, which is something that tools like Gemini and ChatGPT have struggled to do. It also checks for plagiarism and will help protect you against clients and others who might claim that your writing isn't original. We talked about how they do it and the risks that AI continues to pose for writers on this episode, so stay tuned. Before we get to that though, last summer we ran the last-ever live cohort of the Copywriter Accelerator program, and since then, the only way to get the business-building insights and strategies we shared with more than 350 copywriters over the past seven years has been to join the Fast Track version of the Accelerator at thecopywriterclub.com/fasttrack. But I've been working on an updated version of that program, and so the Fast Track is going to go away as well.
So if you've been thinking about joining the Accelerator to learn all the things it teaches, time is running out. What's coming next? Well, it's too soon to reveal what I'm working on, but if you join the Accelerator Fast Track before we launch it, you'll get early access to the new program absolutely free. That's in addition to the Fast Track itself. Until then, you get all of the content, the eight modules, the blueprints, and several different bonuses that are included in the Accelerator Fast Track. And when we launch the new program, like I said, sometime next year, you'll get that updated program too. Don't wait any longer to work on your business, so that you've got your business ready to go in the new year. You'll have a steady flow of clients and a signature service that you're proud to offer them. You'll know how to manage clients and how to attract clients. Get all of that by visiting thecopywriterclub.com/fasttrack and learn more today. And now, with that, let's go to our interview with Jon Gillham. Hey, Jon, welcome to The Copywriter Club Podcast. We like to start with your story. So how did you become the founder of Originality.AI, and I guess also the co-founder of Adbank, Motion Invest, and Content Refined? You've done a lot of this company-starting thing.
Jon Gillham
Yeah, it's been a journey. My background is as a mechanical engineer; I did that in school, and I always knew that I wanted to get back to my hometown and start some sort of online projects. A lot of those projects had a central theme of creating content that would rank in Google, get traffic, and monetize that, whether that was an e-commerce site or a software business. At one point we built up some extra capacity within the team of writers we were working with, and started selling that extra capacity. So we built up a content marketing agency, sold it, and then, having seen the wave of generative AI coming, looked to build a solution to try and help provide transparency between writers and agencies and clients. And that's where Originality came from.
Rob Marsh
So for most people, the experience with AI really started about two years ago, when ChatGPT went live and suddenly everybody was like, oh my gosh, this is not what we were expecting, or it's come along a lot faster. But you've been doing this a lot longer than that. Tell us, basically, how did you get interested in AI and get started with creating these kinds of tools?
Jon Gillham
Yeah, totally agree. I think a lot of people assume everything on the Internet that predated ChatGPT was human generated. But the reality is that there were other tools that predated ChatGPT. Specifically, there's GPT-3, which OpenAI released in 2020, and GPT-2 before that, and from those, many writing tools were built off the back of them, tools like Jasper AI. We were at one point one of the heaviest users of Jasper, where we had a writing service that transparently used AI content but sold that content for a lot less than the human-generated content in another part of the content agency. And so that was where we really started to see the efficiency lift that came from using AI, and to ask who gets to capture that efficiency. Is it the writer that copies and pastes out of ChatGPT, who then displaces a writer that did hard work on their own? That was where we first started playing with AI, and then using it extensively within our content marketing agency.
Rob Marsh
So before we go really deep on AI and the stuff that you've done, I'm interested, since you're a founder and a co-founder: what are some of the biggest challenges that you have faced as you've started your businesses? Again, we're talking to an audience of people who are mostly running their own businesses. So I'm just curious how you've been able to succeed where so many others tend to fail.
Jon Gillham
I mean, there are certainly failures in there, so they're not all successes. The common theme when there has been success is probably two core things. One, we're solving a problem that is meaningful and adding significant value by helping to solve it. And the second piece is when there's been a really good team around the project: when the co-founders on it are great, when the initial hires are really, really good. Those are probably the two traits common to the projects that have gone well, and there are certainly projects that haven't gone well, lots of failures in there as well.
Rob Marsh
It's interesting you say that. I worked at a startup a decade or two ago, and the CEO who came in to run it made it a fun environment, a really great place to work. We had a successful exit, sold off to HP, and I remember the CEO saying, if you're lucky, you get to have an experience like this sometime in your career, where you put together a great team, you've got a great product, you have this great experience. And then he said, and then you spend the rest of your life trying to replicate that at the next company, or the company after. And there's a lot of truth to that.
Jon Gillham
There's a lot of truth to it. In a lot of our weekly all-hands meetings right now we're saying, you know, these are currently the good old days, so enjoy them, because hopefully we will be fortunate enough to be looking back at these days as the good old days. It is a lot of fun right now. And I certainly echo what he was saying: a lot of things need to go right for all the pieces to line up in that sort of scaling stage of a company.
Rob Marsh
Okay, so let's talk about Originality.AI and this tool that you have built. My understanding, as I scan through and check it out, is that it does a few different things: checking for plagiarism, checking whether content was written by AI, and some additional things as well. To me, this seems incredibly useful for a couple of different audiences. One, I teach a college class, and so I'm always using AI checkers; as I see submissions coming in from students, I'm like, that's suspicious, let's run it through the checker. But obviously businesses hiring content writers and copywriters also want to see that their stuff's original. The problem is, sometimes the checkers don't work the way they're supposed to. So tell us about Originality.AI and the problems that you've been solving with it.
Jon Gillham
Yeah, so the problem we started out to solve, coming from the world we were in within content marketing, is the final step in the content quality check, kind of a final QA/QC on a piece of content. Historically that might mean a readability check and a plagiarism check: okay, we're good to go publish it. Now it means checking whether it's been generated by AI or not, and we'll get into some of the challenges around that; checking whether it's plagiarized, though no one plagiarizes anymore when you can just get AI to write it for you; and fact checking. We have a fact checker built in because there's a new, heightened sensitivity around fact checking with the prevalence of generative AI content and hallucinations. And then some of the standard readability, grammar, and spelling checks. So we aim to be that complete content QA/QC step so that somebody can be really confident. We say hit publish with integrity: they can take a piece of content, make sure it meets all the requirements, and then hit publish. Some of the challenges we talked about: AI detectors are highly accurate, but not perfect. The same way a weather forecaster uses AI and gets it right a lot of the time but also gets it wrong to some extent, an AI detector is a classifier that aims to predict whether a piece of content is AI generated or human generated. It makes its best prediction and gets it right, calling AI content AI, in our case 99% of the time if it's just a straight ChatGPT output, but it will call human content AI 1 to 3% of the time. That works in certain settings and doesn't work in others, academia being one, where really it's impossible to apply an academic disciplinary action with a false positive rate above zero percent.
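The tradeoff Jon describes, catching nearly all AI output while still occasionally flagging human writing, is the standard behavior of a binary classifier with a decision threshold. A minimal sketch (the scores, threshold, and data below are invented for illustration, not Originality.AI's actual model):

```python
def classify(score: float, threshold: float = 0.5) -> str:
    """Label a document 'ai' when the model's AI-probability score
    crosses the threshold; otherwise call it 'human'."""
    return "ai" if score >= threshold else "human"

def false_positive_rate(human_scores: list[float], threshold: float = 0.5) -> float:
    """Fraction of genuinely human documents the detector mislabels as AI."""
    flagged = sum(1 for s in human_scores if s >= threshold)
    return flagged / len(human_scores)

# Hypothetical model scores for documents known to be human-written.
human_scores = [0.02, 0.10, 0.95, 0.05, 0.30, 0.01, 0.40, 0.04, 0.08, 0.03]

# At the default threshold, 1 of these 10 human docs is flagged (10% FPR).
# Raising the threshold lowers the FPR at the cost of missing more AI text.
print(false_positive_rate(human_scores, 0.5))   # 0.1
print(false_positive_rate(human_scores, 0.99))  # 0.0
```

The point of the sketch is the lever, not the numbers: any threshold that flags "straight ChatGPT output" 99% of the time will also sweep up some fraction of human writers whose scores land above it.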
Rob Marsh
Yeah, it strikes me too that, because of the way AI was trained on human writing, at least originally (now I think there's more AI training data in the actual dataset), there have got to be 1 to 3% of humans that write the way AIs write anyway. They're boring writers, or they have the cadence we tend to see get picked up, or they use those same cliches we tend to see a lot of. So it makes a lot of sense to me that those writers are going to come up as AI, because, well, AI has been trained to look for this stuff.
Jon Gillham
Yeah. I mean, by definition, all this data has gone into this massive training set, and then it ends up producing a range of outputs. You can ask it, hey, write a Nobel Peace Prize acceptance speech in the style of Dave Chappelle, right? That's going to be a pretty unique piece of content that doesn't look like typical AI content, but there are certainly some ticks to AI content that we feel like we can pick up on. And yeah, there are definitely some people whose style is very similar to the base style of most LLMs, and it can be extra frustrating for them because they end up getting false positives at a higher rate than somebody else might.
Rob Marsh
Yeah. So are you saying that if I have AI write something, and I try to spice it up by saying, you know, write like Dave Chappelle, or make this humorous or silly, Originality can still pick that up at 99 to 100%? It can still tell that it's written by an AI?
Jon Gillham
Yeah. So that's the big difference between a human's ability to detect AI and an AI's ability to detect AI. Humans can get fooled very easily. We have a couple of cognitive biases working against us: an overconfidence bias and a pattern recognition bias. If you ask a room who's an above-average driver, 80% of the room puts up their hand, and the stock market and casinos are built off of humans' tendency to think they see patterns when they actually don't. In all studies, humans' ability to detect AI is like 50 to 60% accurate, and it gets worse when you apply these prompts that make the content more unique than the straight, recognizable ChatGPT kind of content. Whereas AI detectors are picking up a lot more signals than humans are capable of identifying, and their accuracy stays very high, 99%, even for the prompts that are most challenging for a human to identify.
Rob Marsh
So how do you solve that problem? What does your tech do that's not being used by everybody else?
Jon Gillham
Yeah. So in all benchmarks we're the most accurate, but there are other detectors that are close. The unsettling thing, and every AI system in the world faces some of the same challenge, is that if you ask the makers of ChatGPT why it responded like that, they struggle to answer. They can talk about the training data, they can talk about the training method, but they can't say why it responded like that. In a similar way, our detector is picking up on patterns that we don't know. We understand how we trained it, we understand the efficacy tests we put it through, we understand the benchmark tests we put it through, but we can't say this piece of content was identified as AI for these reasons. I wish we could, but that's just not how AI works.
Rob Marsh
Yeah. So this is part of the black box, you know, trouble that leads some of us to think that maybe AI is doing stuff in the background that we're not even aware of and will someday take over the world.
Jon Gillham
Exactly. It is an unsettling experience to create something and not understand exactly how it works.
Rob Marsh
So are there other challenges around AI-generated content and identifying it that we haven't chatted through or hit on?
Jon Gillham
So I think another challenge related to AI content is this: a lot of editors used to use the quality of the content as a tell on whether they needed to go deeper on fact checking. Usually, factually accurate information was also well-written information. The challenge generative AI has produced is that the trigger, "this does not feel like a very well-researched article," is no longer reliable. Now, really well-written, grammatically correct AI-generated content can also be very factually wrong, through hallucinations, having just made stuff up, but convincingly so. So the level of intensity that needs to be applied to fact checking of all content has gone up, because generative AI has sort of poisoned the content: it becomes harder to tell what's trustworthy in today's environment.
Rob Marsh
Yeah. So some examples of that might be: you could be writing, say, a paper for school, where you're saying, hey, give me 10 sources for this particular idea or scientific study. Or if you're writing content for a client, you might be looking for five real-life examples of a particular marketing thing that happened, and the LLM will just hallucinate two of the five, just make them up. They sound real, but they're not. So how do you guys fix that? Because it seems like you're using an LLM that's making stuff up. How do you make sure that it can tell that it's making stuff up?
Jon Gillham
Yeah. There are very few settings, very few times, where an AI or an LLM can achieve the level of perfection that is needed in environments where you need a 99.99% accuracy rate, and fact checking is no different. But what LLMs are great at is going out and assisting humans in that process. So we created a fact-checking aid that looks at a piece of content, identifies all the facts in it, goes out to the web and trusted sources, pulls in a bunch of information, and then makes a judgment on whether each statement is potentially true or potentially false, and then provides a bunch of sources that a human editor can go to and investigate further. It acts as a fact-checking aid that provides its judgment, but its judgment will sometimes be wrong, because AIs get it wrong, and they're the ones that first introduced the problem. But it produces a lot of efficiency for an editor who is already going to do that process, where they need to take a piece of content, identify a fact, go out, find sources, and try to understand what the truth is within the context of that article. It can produce what feel like some pretty magical answers at times. An article might say the boiling temperature of water is 90 degrees Celsius, and everyone's like, no, of course it's 100 degrees Celsius, but it will call it true if the context of that article is mountain climbing at a certain elevation. So it's like: given the context of this article, the fact that water boils at 90 degrees Celsius at this elevation is true.
And it can feel like a magical response: it understood the context of the entire article, the elevation that was mentioned above, or even the base camp that was mentioned, and then it references the elevation and provides the right answer. So that can feel like a pretty cool aid in the fact-checking process. But it does get things wrong at times.
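The pipeline Jon outlines, extract the claims, retrieve evidence, judge each claim in the article's context, and surface the evidence for a human editor, can be sketched in miniature. Everything below is a toy stand-in (a real system would use an LLM for claim extraction and live web retrieval; the `SOURCES` dictionary and the altitude check are invented), but the shape mirrors the boiling-point example above:

```python
# Toy "retrieved evidence" standing in for web search over trusted sources.
SOURCES = {
    "sea level": "water boils at 100 C at standard pressure",
    "high altitude": "water boils near 90 C above roughly 3000 m",
}

def check_claim(claim: str, article_context: str) -> dict:
    """Judge one claim in the context of the whole article and return
    the verdict alongside the evidence, so a human editor can verify
    the judgment rather than trusting it blindly."""
    at_altitude = "altitude" in article_context or "3000" in article_context
    evidence = SOURCES["high altitude" if at_altitude else "sea level"]
    claims_90 = "90" in claim
    # "Water boils at 90 C" is likely true only in a high-altitude context;
    # "water boils at 100 C" is likely true only at sea level.
    verdict = "likely true" if claims_90 == at_altitude else "likely false"
    return {"claim": claim, "verdict": verdict, "evidence": evidence}

print(check_claim("water boils at 90 degrees Celsius",
                  "a climbing article set above 3000 m"))
```

The design point is the last field: the aid's verdict is fallible, so it always returns the evidence it judged against, leaving the final call to the editor.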
Rob Marsh
Sure. So yeah, it would identify maybe outlying situations that we wouldn't necessarily be thinking of off the top of our head that are true, and it could pull some of that stuff in. So let me give you, and this is probably a ridiculous example, and I'm obviously asking you to predict how it would figure this out, but I'm assuming you've heard the very famous quote that's all over the Internet: "You can't trust everything you read on the Internet," attributed to Abraham Lincoln. Now if you were to try to source that, there are literally thousands of pages on the web that have that quote. Would the AI pick that up as false? Or, because it can identify all of these sources out there, do you think it would not be able to identify that? Which, again, is ridiculous, because as humans we all know it's a ridiculous quote. But I'm curious about that.
Jon Gillham
I'm guessing at how it would answer, but I think it would struggle with that, because it depends on the context of the statement. The statement you just made, if you worded it as a common statement, is factually true: it is a common statement, shared all over the Internet, attributed to Abraham Lincoln. So I think the true/false binary classification would struggle, because in certain settings that is a true statement. But where it would really shine is in the description of why it made that judgment. I think it would do a really good job there, because there is such a rich history around that quote that there would be a really good explanation, worded better than I could, something like: this is used as an example of how you can't trust the Internet, and it depends on how it's being used. So I think it would provide a pretty useful explanation. I'm not sure whether it would call it true or false, because there'd be cases where that statement could be made and be a true statement, depending on what came before it.
Rob Marsh
That makes sense. I think this is maybe one of the areas where AI, or LLMs, still really struggle, and that is context shifting, where things are one way in 80% of contexts, but in 20% of contexts it's different. As humans, we're really good at reading the context and changing the meanings, and the machines just aren't quite there yet.
Jon Gillham
Yeah, agreed.
Rob Marsh
Okay, so that's fact checking. And then it also checks for readability. These are tools we're pretty familiar with, because Grammarly has been around for a decade, tools like Hemingway, that kind of thing. Are you doing anything different, or is it sort of similar to what those tools are doing?
Jon Gillham
Sort of similar. One thing that's different is we try to apply a level of science that sometimes gets applied to content and sometimes doesn't. In the case of readability, if you were to search for the optimal readability score to write for the Internet, the answer depends on your audience first. But when we looked at it, there's this really clear distribution, using a few specific scores, across top-ranking articles in Google, and it did not coincide with the prevailing wisdom of "write at an 8th grade level, period." What we've been able to see is that the Flesch-Kincaid reading ease score matches up to a really nice normal distribution around certain score ranges in the top 20 results within Google. So if you're trying to create content that will rank on the Internet, you should aim for a readability score within that range, because that's what the rest of the top-ranking articles do. Now, obviously there are outliers: if you're writing up an intense medical study, sure; if you're writing for children, sure. But what we're doing differently is, instead of providing a non-data-backed recommendation on a reading score, we built our tool specifically for people publishing content on the web, identified the best test to use for the readability score, and then the best scoring range to be in, where we break it down by distribution: is it one standard deviation, two standard deviations away from the average?
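The kind of readability scoring Jon describes can be approximated with the classic Flesch-Kincaid formula. A note on the hedge: the transcript mentions the reading-ease score, but the 6-to-9 band he quotes later matches the grade-level variant, so this sketch uses the grade-level formula with a naive syllable counter. It is an approximation for illustration, not Originality.AI's implementation:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of consecutive vowels (min one syllable).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

def in_target_range(text: str, low: float = 6.0, high: float = 9.0) -> bool:
    """Check the score against the grade 6-9 band discussed here."""
    return low <= fk_grade(text) <= high
```

A piece scoring below the band reads simpler than most top-ranking pages; above it, more complex. Either can be fine if, as Jon says, the audience gives you a strong reason.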
Rob Marsh
That's really cool. So does that do that by topic or do you have to tell it the audience? How does that identify?
Jon Gillham
Yeah, so it's general, it's across all the topics we looked at. We provide the graph and that range, and then you can pick your audience and decide whether you should be on the upper end or the lower end of the range. It's unlikely you should be way off that range on the readability score unless you have a really strong reason to, if your primary objective for that piece of content is to rank on Google and get traffic. We provide a range from roughly 6 to 9, and based on your audience, you can adjust within that range where you think you should be.
Rob Marsh
Okay, yeah. Like I said, that feels incredibly useful, actually, especially for a writer who is writing across different niches or industries, maybe addressing different audiences. Does the tool also make suggestions, like here's how you can dumb it down or smart it up, as part of that?
Jon Gillham
It will identify, sentence by sentence, which parts make it challenging to read. If there are parts of the writing that are at a very high level, it will identify those parts and provide guidance on dumbing them down, making them easier to read, cleaning them up, on a sentence-by-sentence level. It doesn't provide guidance in the other direction.
Rob Marsh
Okay, yeah. And so it's not actually rewriting, which seems like it would defeat the purpose of having this be an AI checker in the first place.
Jon Gillham
We're wrestling with that topic, and the same thing on grammar and spelling, where we have some users that would love a "fix all issues" button, but then it will trigger the AI detection. So we're wrestling with it, because maybe there's a use case there, but we've got to really figure out how we don't confuse users. I think clicking a button inside of an AI detection tool that says "fix all issues," and then having the result detected as AI, would potentially be a confusing user experience.
Rob Marsh
Yeah, that seems to be one of the triggers for identifying a human writer: there are actually some errors in it. That's certainly something I see with my students in the class that I teach. And maybe this is where those 50% human misidentifications start happening. But if I see a couple of grammatical errors, I'm like, oh, okay, yeah, this is clearly human written instead of AI.
Jon Gillham
Unless they added that to the prompt.
Rob Marsh
Yeah, exactly. Yeah. Please add three misspellings so that Rob Marsh doesn't figure this out.
Jon Gillham
Yeah.
Rob Marsh
So what else does the tool do? Or what's the next evolution going to be?
Jon Gillham
Yeah, so we want to help publishers publish content and be as successful as possible by publishing that content, helping them understand whether the content will perform well within Google. So we have an interesting take on content optimization in the works, which we're really excited about. The current method of content optimization tools, if you're familiar with them, Surfer SEO or MarketMuse or Clearscope, is to look at the top 20 results and then do, I'll call it dumb math, and just say these are the keywords you should include. I think there's a smarter way to do that, and we're testing it, and we're excited for what's going to come of it. And then there's any job that a copy editor does; we try to be the tool that helps copy editors do their job far more efficiently and effectively. One of those jobs is making sure that a piece of content meets the editorial guidelines of a company. Whether that's always spelling a word a certain way that might not be the standard spelling, or every paragraph being no more than three sentences, or active voice versus passive voice, whatever those editorial guidelines might be, we're trying to provide that editorial guideline compliance component. So an editor can put in a piece of content, click a button in our tool, and then understand exactly how that piece of content matches up against each of the things they need to check for: AI, plagiarism, fact checking, grammar, spelling, readability, editorial compliance with their company's guidelines, and then, ultimately, whether it's going to perform well in Google, since that's a lot of what our users are using it for. So that's what's coming.
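A guideline-compliance pass like the one Jon describes, preferred spellings, a cap on sentences per paragraph, reduces to running a rule set over the text. A minimal sketch with invented house rules (the rule names and values here are examples, not a real style guide):

```python
import re

# Hypothetical house guidelines: preferred spellings and a cap on
# sentences per paragraph, echoing the examples in the interview.
GUIDELINES = {
    "max_sentences_per_paragraph": 3,
    "preferred_spellings": {"web site": "website", "e-mail": "email"},
}

def check_compliance(text: str, rules: dict = GUIDELINES) -> list[str]:
    """Return a list of human-readable violations, one per finding."""
    issues = []
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    for i, para in enumerate(paragraphs):
        sentences = len(re.findall(r"[.!?]+", para))
        if sentences > rules["max_sentences_per_paragraph"]:
            issues.append(f"paragraph {i + 1}: {sentences} sentences "
                          f"(max {rules['max_sentences_per_paragraph']})")
        for wrong, right in rules["preferred_spellings"].items():
            if wrong in para.lower():
                issues.append(f"paragraph {i + 1}: use '{right}', not '{wrong}'")
    return issues

print(check_compliance("Visit our web site. It is new."))
```

An empty list means the draft passes; each string is something a copy editor would otherwise have to catch by hand.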
Rob Marsh
So I see a copy editor might want to use that to do, basically, 90% of their job, and then they can take the output and do a quick read through it and save themselves a lot of time. I suppose a writer could do that as well, to reduce the need for as much of a copy editor, or a client may be interested in doing that on the client side just to double-check everything.
Jon Gillham
Yeah, we see that a lot. We're building it for the copy editor, but we're seeing that whole value chain: the writer using it up front to make sure the piece meets those requirements, because they know what they're being judged against, and then the end client using it as well, to say, am I ultimately getting content that meets my expectations? Generative AI has caused a lot of problems in the world of writers, and one of the biggest has been the lack of trust that has bubbled up around what writers have done, what they haven't done, and what the expectations on writers are. So we're trying to be the tool that provides transparency from the client, through whoever's in the middle, editor, agency, et cetera, to the writer who's going to get paid fairly for their work. Generative AI has definitely created a lot of challenges, with writers facing more of those challenges than probably any other industry, and hopefully we're the good Terminator as opposed to the bad Terminator in this battle.
Rob Marsh
So you're kind of hinting at it, but one of the challenges a lot of writers have had is they write something, they submit it to the client, the client runs it through an AI checker and gets a false positive, and the writer says, hey, I wrote this whole thing. So the trust is gone there. In order to fix that, would you recommend copywriters and content writers have the tool themselves? Or do they do what I recommend to my clients: say, you guys ought to get Originality.AI and run it through that, because that will show you it's my copy. What's the dynamic there?
Jon Gillham
Yeah, so first, false positives happen. We know that, especially at the volume of content we're running through, and we understand how much it sucks when a writer gets falsely accused. It's really tricky right now. I'll share a quick aside: we had a writer writing for Originality, and we obviously use our own tool, and they swore up and down that they had not used AI. Then we have a free Chrome extension that lets people visualize the creation of a document. And so it takes...
Rob Marsh
It can follow the change tracking in a Google Document.
Jon Gillham
Yes. Behind that change tracking there's a ton of data, character-by-character metadata inside a Google Doc. Our free Chrome extension pulls that out and can recreate the writing process. If you see a two-minute cut-and-paste of a thousand words, one writing session of fifteen minutes per thousand-word article, and it hits 100 percent on the probability of being AI-generated, I'm pretty confident that was AI. In our case, when that writer who swore they hadn't used AI saw the Chrome extension output, they ultimately admitted they had used AI and swore they wouldn't again. We coached them up on it, and maybe we'd still work with them, but they don't. So what I recommend writers do is create the document in a Google Doc, use a free Chrome extension like ours that will show the creation process, and then use a tool like Originality to know whether they're going to have a challenge. If it is going to be a false positive, they can show the client that they truly created that content themselves, and they can get paid fairly for it. The world I fear for writers is one where there is zero protection against other people using AI. There are a lot of really world-class writers whose equivalent AI can't write right now. But AI can write a lot better than I can, and a lot better than some writers I've hired in the past, and those individuals are extremely at risk of their jobs being replaced. Based on the progress of AI, I think most writers are going to be at risk of being replaced if there isn't some effective defense, some way of saying what is human and what is AI.
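[Editor's note: the revision-history approach described above, where gradual typing looks human and a sudden thousand-word insertion looks like a paste, can be sketched as a simple heuristic. This is an illustration only, not Originality AI's actual implementation; the `EditEvent` structure and the thresholds are made-up assumptions.]

```python
from dataclasses import dataclass

@dataclass
class EditEvent:
    timestamp: float   # seconds since the writing session started
    chars_added: int   # characters inserted in this revision step

def flag_paste_bursts(events, max_burst_chars=500, window_secs=120):
    """Flag moments where a large amount of text appears almost at once.

    A human typing session adds text gradually; a window in which hundreds
    of characters land at once looks like a cut-and-paste. The thresholds
    here are illustrative guesses, not calibrated values.
    """
    flags = []
    for ev in events:
        # Sum characters added within the trailing time window.
        window_chars = sum(
            e.chars_added for e in events
            if ev.timestamp - window_secs <= e.timestamp <= ev.timestamp
        )
        if window_chars >= max_burst_chars:
            flags.append(ev.timestamp)
    return flags

# Steady typing (~20 chars every 10 s) is not flagged;
# a 1,000-character paste in a single step is.
typed = [EditEvent(float(t), 20) for t in range(0, 600, 10)]
pasted = typed + [EditEvent(610.0, 1000)]
print(flag_paste_bursts(typed))    # []
print(flag_paste_bursts(pasted))   # [610.0]
```

A real tool would also weigh session duration and total word count, as Jon's fifteen-minutes-per-thousand-words example suggests, but the core signal is the same: text arriving far faster than anyone can type.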
Rob Marsh
Yeah, that makes a lot of sense. Okay, so maybe leaving the world of writing and AI. I don't know if you've got thoughts on this, but where do you see AI going more broadly? For writers, obviously, there's a bit of a threat to our livelihoods, especially if we're writing in the bottom half of the writing scale, without an original voice of our own that's really difficult to copy, or without the ability to write for our clients in their voices. Obviously there's risk there. But what about beyond writing? Do you see AI as a threat to the human race? Where are we at?
Jon Gillham
I would have answered differently probably two years or a year and a half ago. When we first launched our detection, we thought GPT-4 would come out, we would no longer be able to detect content, and that would be it. You know, we'd just enjoy the last few years of humanity before AI takes over.
Rob Marsh
We all become paperclips. Yeah, yeah.
Jon Gillham
But what we have seen out of LLMs is a plateauing around intelligence. If we look at the leap from GPT-3 to GPT-4 to now, and this could age really poorly, we're seeing a plateauing in the capability of the tools, and at the same time our detection is better now than it has ever been, despite there being far more advanced models. We're also seeing all of these open-source models closing in on the closed-source models. What's happening now is that additional features are getting added on top; the brain is already there. The analogy I like to use is that a spreadsheet is a pretty simple piece of technology, but the world would shut down if no one was allowed to use a spreadsheet for a day, because it is so pervasive across so many pieces of business operations. I think we'll see a similar-ish trend. A lot of people will get displaced; developers, writers, graphic artists are all at risk. But hopefully it will be a force for the expansion of GDP and the creation of additional jobs. Companies that used to need twenty people will need five, and therefore there will be more companies. So I'm optimistic, but I do think there will be disruption along the way.
Rob Marsh
I mean, disruption is not new. It happens every few decades, certainly every century or two. So this may just be the next big disruption. But until that really gets underway, tools like this are really helpful in protecting the things we do as writers. So, Jon, if people want to check out the Chrome extension first of all, is it also called Originality, or does it have a different name?
Jon Gillham
Yes, if you search "Originality AI Chrome extension," it's available.
Rob Marsh
Okay. And then obviously Originality AI. Where else can people go to connect with you or to find out more about how you think about this whole problem?
Jon Gillham
Yeah, happy to connect with anyone, especially anyone facing challenges around false positives; we're always eager to help guide people through that. You can reach me at jon@originality.ai or find me on LinkedIn.
Rob Marsh
Awesome. I appreciate your time and your willingness to talk through all of this, because, yeah, it is a challenge, and there are so many cool tools that can make it easier and better. So thank you for that.
Jon Gillham
Thanks, Rob. Thanks for having me.
Rob Marsh
Thanks to Jon for helping me understand a bit more about the latest changes we're seeing in the world of artificial intelligence. You should definitely check out Originality AI at originality.ai. Obviously, AI has presented a challenge for writers over the past couple of years. We've seen a lot of clients shift their content plans toward AI tools instead of content writers, and that has not always resulted in better content or copy; many of them have since changed back. There are, however, copywriters doing some pretty amazing things with AI. So what's the difference? They're putting in the time to learn and use the tools. Originality, like I said, is definitely worth checking out, but it's not the only tool you should be trying. Try tools like Claude, ChatGPT, and Le Chat, and writing tools like Write. Use the AI features built into tools like Notion, Hemingway, and even Google Docs. This stuff is important, and if you want to be a copywriter or a content writer for more than the next year or two, you really do need to know how to use these tools. If you haven't gotten started already, you can get my AI bullet writing prompt completely free at thecopywriterclub.com/ai. It's a pretty in-depth prompt that will help you write some pretty amazing bullets, headlines, and subheads for your emails, subject lines, and sales pages, however you want to use it. That's the end of this episode of the Copywriter Club Podcast. The intro music was composed by copywriter and songwriter Addison Rice. The outro was composed by copywriter and songwriter David Munter. If you've enjoyed this episode, please share it with someone you know, or leave a review at Apple Podcasts, Spotify, or wherever you listen to your favorite podcasts. And let me just add this.
If you know someone who would be a great guest for the show, will you please email me and let me know? I'm rob@thecopywriterclub.com. Let me know what you're hoping they might share on the show; I'd really love your feedback on that. Thanks for listening. See you next week. Copywriters coming together to help the world.
Jon Gillham
Write better copy and make more money. The Copywriter Club can make you lots of money. Listen to Kira and Rob, and the Copywriter Club can make you lots of money, as long as you listen through the whole damn episode.
The Copywriter Club Podcast: Episode #423 – Copy, Originality, and A.I. with Jon Gillham
Released on November 26, 2024
Hosts: Kira Hug and Rob Marsh
Guest: Jon Gillham, Founder of Originality AI, Co-founder of Ad Bank, Motion Invest, and Content Refined
In Episode #423 of The Copywriter Club Podcast, hosts Rob Marsh and Kira Hug delve into the evolving landscape of copywriting amid the rise of artificial intelligence (AI). Their guest, Jon Gillham, founder of Originality AI, brings valuable insight into AI detection tools, the challenges posed by generative AI, and strategies for maintaining originality and integrity in copywriting.
Background and Entrepreneurship:
Jon Gillham began his career as a mechanical engineer before transitioning into the online content space. He founded several ventures, including Ad Bank, Motion Invest, and Content Refined, before establishing Originality AI. His transition was driven by a passion for creating content that ranks on Google and monetizes effectively.
Founding Originality AI:
Originality AI was born out of the need for transparency between writers, agencies, and clients in the wake of the generative AI boom. As Rob Marsh explains:
“Originality AI is the most accurate AI detector available today, and what's more, it’s also a fact checker, which is something that tools like Gemini and ChatGPT have struggled to do.” ([09:39])
AI’s Impact on Writing:
Rob Marsh highlights that AI has transformed copywriting, not just as a tool but as a fundamental shift in how content is created and evaluated. He notes,
“AI was a game-changing technology that would impact almost everything writers do.” ([00:34])
Early Adoption of AI Tools:
Jon shares that even before the widespread popularity of ChatGPT, tools like GPT-3 and Jasper AI were integral to his content marketing strategies. This early adoption provided significant efficiency gains but also raised concerns about originality and the displacement of human writers.
Comprehensive Content Quality Assurance:
Originality AI serves as a multi-faceted tool designed to ensure content integrity through various checks:
AI Detection:
“AI detectors are trying to predict whether a piece of content is AI-generated... Originality AI can detect it with 99% accuracy.” ([11:50])
Fact Checking:
“We created a fact-checking aid that identifies facts in content and verifies them against trusted sources, assisting human editors in the process.” ([18:17])
Plagiarism Detection:
Readability Analysis:
“We aim to align readability scores with what top-ranking Google articles exhibit, providing data-backed recommendations.” ([23:45])
Editorial Guideline Compliance:
Upcoming Features:
Originality AI is expanding to include smarter content optimization tools that surpass traditional SEO methods by integrating more nuanced data analysis.
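The readability analysis quoted above compares a draft's readability score against what top-ranking articles exhibit. As a rough illustration of that kind of scoring (not Originality AI's actual method), here is the classic Flesch Reading Ease formula; the syllable counter is a crude vowel-group heuristic, which is an assumption on our part:

```python
import re

def count_syllables(word):
    """Crude syllable estimate: count runs of vowels, minimum 1."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    """Flesch Reading Ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher scores mean easier-to-read text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Short sentences of one-syllable words score as very easy reading.
score = flesch_reading_ease("The cat sat on the mat. It was warm.")
print(round(score, 1))
```

A tool aligning readability with top-ranking pages would compute a score like this for the draft and for each competing article, then recommend moving toward the competitors' range.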
Accuracy and False Positives:
While Originality AI boasts high accuracy, Jon acknowledges the inherent limitations:
“AI detectors are similar to weather predictions—they’re highly accurate but not perfect. Originality AI calls AI-generated content with 99% accuracy, but there’s still a 1-3% false positive rate.” ([11:50])
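To make the detection-rate and false-positive figures quoted above concrete, here is an illustrative Bayes calculation. The 99% detection rate and a 2% false positive rate come from the quote; the 30% base rate of AI-generated submissions is a hypothetical assumption chosen only for the example:

```python
def prob_actually_ai(detect_rate, false_positive_rate, base_rate):
    """P(content is AI | detector flags it), by Bayes' rule.

    detect_rate: P(flag | AI), false_positive_rate: P(flag | human),
    base_rate: P(AI) among submitted documents.
    """
    p_flag = detect_rate * base_rate + false_positive_rate * (1 - base_rate)
    return detect_rate * base_rate / p_flag

# Detector: 99% true positive rate, 2% false positive rate.
# Assume (hypothetically) 30% of submitted content is AI-generated.
p = prob_actually_ai(0.99, 0.02, 0.30)
print(f"{p:.3f}")  # ~0.955: roughly 1 in 22 flagged docs is human-written
```

The point of the exercise: even with a small false positive rate, some flagged documents belong to honest writers, which is why the episode stresses pairing detection with revision-history evidence rather than treating a flag as proof.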
Human vs. AI Detection Capabilities:
Rob observes that humans are often misled by cognitive biases and pattern recognition, leading to only 50-60% accuracy in detecting AI-generated content. In contrast, Originality AI maintains consistent high accuracy across various content styles.
Erosion of Trust:
The proliferation of AI-generated content has led to skepticism about the originality of writers’ work. Jon highlights:
“Generative AI has created a lack of trust around what writers have done and haven’t done, complicating client relationships.” ([32:29])
Protecting Human Writers:
Originality AI aims to restore trust by providing verifiable proof of human-generated content. Jon recommends writers use tools like their Chrome extension to visualize the creation process, ensuring transparency and safeguarding against false accusations.
Future Outlook:
Jon is optimistic about AI’s role in expanding GDP and creating new jobs, despite the disruption it causes. He likens AI’s integration to that of spreadsheets: simple yet pervasive, fundamentally altering workflows but ultimately beneficial.
“I’m optimistic, but I do think there will be disruption along the way.” ([38:45])
Enhancing Content Optimization:
Originality AI plans to introduce advanced content optimization features that provide smarter, data-driven recommendations beyond existing SEO tools like Surfer SEO or Market Muse.
Editorial Efficiency:
The tool aims to streamline the copyediting process, handling up to 90% of a copy editor’s tasks, which allows human editors to focus on nuanced aspects of content quality and compliance.
Call to Action:
Jon encourages writers and publishers to adopt Originality AI to navigate the challenges posed by AI-generated content, ensuring integrity and maintaining trust with clients.
“We’re trying to be that tool that provides transparency from the client to the writer, ensuring fair compensation and preserving the writer’s reputation.” ([31:11])
Episode #423 of The Copywriter Club Podcast offers a comprehensive exploration of the intersection between copywriting and AI. Jon Gillham’s insights into Originality AI provide valuable perspectives on maintaining originality, ensuring content integrity, and navigating the evolving landscape of AI in the writing industry. For copywriters seeking to uphold their craft amid technological advancement, adopting robust tools like Originality AI is one way to future-proof a career.
Connect with Jon Gillham and Originality AI:
Hosts’ Resources:
Join the conversation and ensure your copywriting remains authentic and impactful in the age of AI.