Vocal AI Policy
Part 5 of a Vocalite Manifesto
We've arrived at the last major installment to my Vocalite manifesto. If you've come along for the ride, my hat's off to you! These have not been pleasant pieces to write and I'm sure they're not fun reads either. If this is the first part you've encountered and want to know the why behind the piece, you can check out Part 1 linked below:
And this last one is a bit brutal, as it deals with the issue on Vocal that has concerned me the most: AI.
AI is everywhere and it's here to stay. I'm a fan of some of the ways it can be used and it's a pretty remarkable feat.
But on a writing platform like Vocal it has wreaked absolute havoc. Understandably, I think Vocal has struggled immensely with how to navigate this Brave New World of AI. People and institutions everywhere have been overwhelmed by the ramifications of its use and I expect that's not going to change anytime soon.
Yet despite the difficulties there has to be an approach for how to contend with AI issues. So what is Vocal's current AI policy?
Vocal has issued a statement that they will "not take a hard stance against AI-generated content" but that "AI-generated content should not be used as a substitute for the hard work created for and cherished by the Vocal community." Vocal's current AI policy, published two years ago, states that, "At this time, any content that utilizes AI to support its creation must contain a disclaimer."
A year ago, in a Vocal Updates publication, Vocal acknowledged that "instead of AI being used as a tool to brainstorm and help kickstart ideas that a person can evolve, it has been used by some to circumvent the creative process entirely".
In response, Vocal rolled out AI-generated disclosure labels and added an option to the Report Story feature for flagging undisclosed AI-generated content. In the update piece, Vocal declared that "Stories that are published without this [AI] label and are found to be AI-generated will be subject to removal after internal review."
I appreciate that Vocal put changes in effect when they realized the problems AI was creating. The intentions were there, as were genuine attempts to address those problems.
But in the past year, AI pieces have been awarded Challenge and Top Story prizes. Users publishing undisclosed AI content and leaving AI-generated comments on pieces have placed over and over again on the Leaderboard. And the amount of undisclosed AI content that gets published to Vocal every single day is enormous.
Being able to report a story for AI seemed like a good step, but there's been little to no follow-through on the threat to remove AI stories. Imagine we were all gathered together in a space and someone said, "Raise your hand if you've reported a story for undisclosed AI content and it was never removed." I picture a scene like the one from Mean Girls where everyone in the gymnasium raises their hand because they've been personally victimized by Regina George.
I've reached out to Vocal a couple of times when a user I've repeatedly reported for undisclosed AI pieces continues to publish more AI content. When I do contact them, I'll include lists of links to stories I've encountered that are AI-generated. The responses I've received from Vocal encourage me to make sure I'm submitting the Report Story form because that will help the issue be investigated more quickly. But when I've never seen a story I've reported for AI removed, it seems futile. So I try to keep reporting, but I feel like I'm wasting my time.
A couple of months ago I posted a piece encouraging other Creators to check for AI if they were suspicious of a user's comments or stories. All the AI comments being left on stories and the AI pieces being passed off as human-created content were starting to eat at me. I labeled it a call to join an AI Vigilante Task Force. The name was meant to be somewhat humorous. But now I find myself considering why vigilantes tend to emerge. One of the main causes is the ineffectiveness of the authority that is supposed to be policing illegal activity.
As I've said, I love Vocal. I think the Vocal Team works really hard at what they do. But in terms of policing the AI use that they've declared is against Vocal guidelines, I think the platform has fallen short. This is a problem for many reasons, but one that's been weighing on me a lot is how damaging it is to the community element of Vocal and the trust in the platform's integrity.
At one point I pleaded with Vocal to at least start putting AI labels on pieces that hadn't been disclosed by the user publishing them, even if the pieces weren't removed. At least then people would know what kind of users they were dealing with if they checked their pages.
If Vocal doesn't actually enforce consequences for publishing undisclosed AI, then every individual Creator is on their own to figure out whether they're supporting a genuine Creator or an AI poser. If you've ever confronted some of the AI users on the platform, you know things can get unpleasant, and you've probably learned how manipulative they can be. There are some wonderful, trusting people on this platform whom I admire so much, and having to witness them being duped because they decided to take someone at their word is sickening.
It's possible that being a teacher has made me jaded. But in situations like this where deception is one of the root problems I feel more equipped to assess when I'm being lied to.
"Gaslighting" is a term I hear on a regular basis. It's one that gets employed pretty frequently if you're a teacher these days. Look up the definition and you'll see phrases like "psychological and emotional abuse," "insidious manipulation," "questioning of reality," and "doubting one's own sanity."
It can be a serious matter and can involve some very unbalanced power dynamics in a relationship. Sometimes it goes on for long periods of time and victims suffer severe psychological distress. In most cases when we teachers use the term, it's to refer to instances that are not as sinister or damaging. But there is definitely still deceit and manipulation taking place. (I know I'm digressing a little, but I promise I'm bringing it back around.)
A couple of weeks ago I was covering another teacher's class, and one of the students got up from her table, set her phone on a shelf, and recorded herself doing a dance. I made my way over to her table and declared that if I saw her phone again I would be taking it. Without the slightest hesitation she said, "What phone?" And her friends immediately gave me puzzled looks and jumped in with, "That wasn't a phone" and "It was just a calculator."
These kinds of deceitful interactions happen all the time in the classroom, but the classroom isn't the only setting where they occur, and it's not just teenagers who do it. There's been plenty of this lower-level "gaslighting" happening on Vocal too.
I've lost count of how many times I've seen comment threads on Vocal between a legitimate Creator and an AI user in which the AI user responds with something along the lines of, "What AI? I'd never use AI. How dare you accuse me of something you can't prove."
Even if you've examined tons of evidence and have no doubt that AI is being used, it can throw you for a loop. I've had moments where, even though I have no question about what I know to be true, the fact that nothing is ever done about it starts to get to me. And I am beyond thankful for the several Creators on Vocal who've reached out and said, "You're not crazy. I see what you're seeing."
Lines end up getting drawn in the sand and it feels like we get pitted against each other on different teams, even though all of us who write our own work and want a great community of fellow writers are in fact on the same team. No one should have to confront an AI user over their undisclosed AI content. That's Vocal's job. But it's a job that's not getting done. And the negative impacts on the community here are real.
I said at the start of this project that my goal is to be solutions oriented. So what are the solutions?
Clearly, both the volume of AI content and the number of AI reports that Vocal has to investigate are astronomically high.
So let's tackle the first part of the issue. How can the amount of AI content getting published on the platform be reduced? I see two options, and both have been suggested by multiple Creators in the comments on other parts of this manifesto.
Option #1 - Eliminate non-Vocal+ memberships
Maybe that seems drastic, but even thinking about it from a business perspective, how do non-Vocal+ memberships benefit the platform? They don't contribute financially in any way, and the massive amount of non-Vocal+ content just clogs things up on the site.
Option #2 - Limit non-Vocal+ members to publishing one story a week and have their stories be more closely moderated for AI and plagiarism
This one is an idea Dharrsheena Raja Segarran suggested, and I think it's a brilliant compromise if there needs to be one. It allows for non-Vocal+ participation but restricts publishing to a level that can be more faithfully checked for AI and plagiarism. And it makes a Vocal+ membership more of a premium: the ability to publish at any time and receive quicker approval.
Yes, there are Vocal+ members publishing undisclosed AI pieces too, but I think they could be more easily investigated if there isn't such an overwhelming amount of content being published by non-Vocal+ members day in and day out.
And speaking of investigation, let's move to the second part of the issue: the reporting of individual stories isn't working.
I suggest there needs to be a way to formally report a user, not just one of their stories. For whatever reason, Vocal seems unable or unwilling to flag someone's AI piece or remove it on an individual basis, despite saying there would be consequences for publishing undisclosed AI.
It makes me wonder if the concern for false positives (human content being flagged as AI-generated) with scanners is the reason. If so, I think taking many stories at a time, or even a user's entire catalog of work, into account is the way to go. False positives can happen, but they're rare, and an occasional fluke can't explain an abundance of flagged content. If 15 out of a user's 20 stories are getting flagged as AI, you don't have a false positive issue.
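To put a rough number on that intuition, here's a quick back-of-the-envelope calculation. The 5% false-positive rate and the 15-of-20 scenario are my own illustrative assumptions, not figures from Vocal or any particular scanner:

```python
from math import comb

def prob_at_least(k: int, n: int, p: float) -> float:
    """Probability of k or more false positives among n human-written
    stories, if each is independently misflagged with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Even with a generous 5% false-positive rate, the odds that 15 of a
# user's 20 human-written stories all get misflagged by pure chance
# are vanishingly small (far below one in a billion).
print(prob_at_least(15, 20, 0.05))

# A single flag among 20 stories, by contrast, is entirely plausible,
# which is why judging a user by one flagged story is risky.
print(prob_at_least(1, 20, 0.05))
```

In other words, one flag proves nothing, but a whole catalog of flags can't reasonably be written off as scanner error.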
And whatever Vocal's "internal review" process is that they mentioned in their Update publication, it probably involves some sort of scanning tool. Either that or it's a person making a subjective judgment call on whether or not a piece is AI, and that seems highly unlikely.
In the aforementioned Update piece it was stated that,
"We've tested a bunch of third-party tools (IE Copyleaks) that the community suggested, and they have been helpful in identifying unlabelled AI content. Unfortunately, they are very expensive at scale and because they are nascent, they produce too many false positives. So we started to build our own AI detection system, built on open source and our internal machine learning models."
I think it's great that Vocal is trying to develop their own unique system. But it's taking too long.
In the meantime, I think they need to pay for the best-quality tool on the market, even if it's only used for small-scale work: investigating specific AI users who've been reported, or verifying Top Stories and Challenge winners.
Maybe if Vocal eliminates non-Vocal+ memberships or limits their publishing ability considerably it will prompt more people to become Vocal+ members and help fund the purchase of AI detection tools while Vocal keeps working on their own?
The last piece to consider is kind of a big one: the consequences. What is the game plan for actually doing something when a user is publishing undisclosed AI? Well, what are the options?
- Zero-tolerance ban (the user's account is shut down and their AI content removed)
- Removal of AI pieces and added restrictions on publishing amount with increased moderation of content
- Just removal of AI pieces with no additional measures
- Having the AI content label added to the undisclosed AI pieces and added restrictions on publishing amount with increased moderation of content
- Just having the AI content label added to the undisclosed AI pieces with no additional measures
- A blacklist from ever receiving Top Stories, Challenge placements, and Leaderboard prizes
Personally, I'd like the consequences to be on the more extreme end so that they actually discourage the use of AI. It would finally bring an end to these repeat offenders, who I think couldn't care less about how many times they get their wrists slapped.
If there's some reason those more intense measures can't be taken right away, how about some progressive discipline (another teacher buzzword)? A three-strikes-you're-out kind of process? Strike 1: you get those AI labels slapped on and your publishing ability restricted. Strike 2: your AI content gets removed, everything you publish is heavily moderated, and you're blacklisted from awards. Strike 3: you're out of here!
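For what it's worth, the ladder above is simple enough to sketch in a few lines. The penalty wording and thresholds here are my own illustration of the idea, not anything Vocal has implemented:

```python
# Illustrative three-strikes ladder; penalties are hypothetical,
# not an actual Vocal policy.
STRIKE_PENALTIES = {
    1: ["AI labels applied", "publishing ability restricted"],
    2: ["AI content removed", "all publications heavily moderated",
        "blacklisted from awards"],
    3: ["account banned"],
}

def record_strike(user: dict) -> list:
    """Increment a user's strike count and return the penalties
    triggered at their new strike level."""
    if user["strikes"] >= 3:
        return ["account banned"]  # already out
    user["strikes"] += 1
    return STRIKE_PENALTIES[user["strikes"]]

offender = {"strikes": 0}
print(record_strike(offender))  # first offense: labels + restriction
print(record_strike(offender))  # second: removal, moderation, blacklist
print(record_strike(offender))  # third: ban
```

The point of a ladder like this is that every report has a visible, escalating outcome instead of disappearing into an opaque review queue.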
It's been a long time since I became a Vocal+ member, so I don't know if this is already in place, but there should be a statement that has to be signed acknowledging that the publication of undisclosed AI will result in the user's Vocal+ status being revoked.
And hopefully if non-Vocal+ membership is removed or has more restrictive publishing abilities, the only users who would even need to be reported would be Vocal+ members. I imagine that would be a much more manageable caseload of investigations to conduct.
All of these ideas and considerations are aimed at making it possible for Vocal to actually follow through on what they've already declared in their AI policy. If undisclosed AI is being published, there need to be consequences, as promised.
Maybe Vocal needs to consider a ban on AI altogether. I'm not sure why they would want the platform to be a place for publishing even properly disclosed AI. But I've said plenty already, so I'll bring this to a close.
The current approaches to handling undisclosed AI on Vocal are not sufficient. Tactics need to change for the sake of the platform's reputation and the community it fosters. Reduce the flood of AI content coming in and stop the bleeding of platform resources to AI users. This isn't a problem that's easy to fix, but it's not going anywhere. I think it's well past time to make some hard decisions that aren't going to be universally popular for the benefit of the platform's integrity and the promotion of human creativity.
Author's Note: I'm behind on responding to everyone's comments and feedback but I hope to catch up today! And hoping to catch up on reading all the wonderful things you all have published the last few days! I apologize for how long these are and for firing them off pretty quickly. If you've read them all you've read a piece of about 9,000 words. So give yourself a pat on the back!
About the Creator
D.K. Shepard
Character Crafter, Witty Banter Enthusiast, World Builder, Unpublished novelist...for now
Fantasy is where I thrive, but I like to experiment with genres for my short stories. Currently employed as a teacher in Louisville.
Comments (13)
"Maybe Vocal needs to consider a ban on all AI all together. I'm not sure why they would want the platform to be a place for publishing even properly disclosed AI."- 100% would support this. I don't really get why they would want people creating with AI anyway 🤷🏾♂️ The two solutions you have proposed are excellent DK! I'm all for either getting rid of the non plus memberships, or restricting how many stories they can publish a week- I think this would actually solve a huge chunk of the AI issue.
The Mean Girls reference had me 💀😭 What an excellent effort for change. Thank you so much DK 🫶🏽
These articles are great… you’ve put a huge amount of effort into them & I agree wholeheartedly with it all. Thanks so much and don’t worry about replying to this. Great conclusion: “Maybe Vocal needs to consider a ban on all AI all together.”✅ “ how damaging it is to the community element of Vocal and the trust in the platform's integrity.” So true!🥹 ✅ Option #2 - Limit non-Vocal+ members to publishing one story a week and have their stories be more closely moderated for AI and plagiarism… it's a brilliant compromise if there needs to be one.” 👍🏼”if Vocal eliminates non-Vocal+ memberships or limits their publishing ability considerably it will prompt more people to become Vocal+ members and help fund the purchase of AI detection tools while Vocal keeps working on their own” I’m not a strict disciplinarian but can’t tolerate lying, stealing and cheating! However, I choose: “ Zero-tolerance ban (the user's account is shut down and their AI content removed).”😳 Agreed: “ Vocal+ member statement that has to get signed which acknowledges that the publication of undisclosed AI will result in the user's Vocal+ status being revoked.” Fabulous effort 🙌.
Idk, I have found them to be quite fun, tbh. What do you think of the fact Gen AI uses the work of other creators to train itself without gaining permission or crediting them for it? Some people think using it at all is unethical for this reason. I don't have a strong opinion about this at this point, but I'm curious about yours. I don't think Vocal should allow any written AI content at all, given that it drags down the overall standard of work published here. Like you said, not a good look. If anyone is found to be posting AI content, then restrict their publishing ability and blacklist them. I'd LITERALLY like to see them blacklisted, with a black stripe across the top of their page and all their stories, so everyone knows who and what they are and can avoid.
Great article, DK! I take the harshest standpoint. I am a Medium editor, and on New Writers Welcome, we have a very clear no AI policy in our submissions guidelines. You get caught, and you are immediately banned from the platform. I edit there on Fridays and use a couple of scanners- Copyleaks and Grammarly Pro. Sometimes, I have to investigate; this happened yesterday, but to date, I've had great results. In short, I'd love nothing more than to yank all the AI offenders off of Vocal.
Awesome job, DK! You've covered so much here and I wish I had something meaningful to add. I agree with all your points and really hope Vocal sees all your posts - I've been reading them all and just haven't had time to comment. I love Vocal but they really do need to step up on so many of these issues. Coming from a fellow teacher...the story about your students trying to gaslight you was so irritating 😅 I work with littles (TK) and they aren't quite that sly yet, although they do try to be sneaky, lol. Thank you so much for being our voice! <3
I loveeeeee the zero tolerance ban the most. The progressive discipline is good too, but I'd like it better if these AI frauds are kicked to the curb immediately. Also, gosh, the way those students tried to gaslight you! I certainly would have questioned my eyesight if I indeed mistook a calculator for a phone 😅😅 Oh and thank you so much for including my idea. I'm a non V+ as well and I would love to keep writing for this platform. Apart from me, there are other legit creators who are non V+ as well. And there are also legit creators who occasionally switch between V+ and non V+. So I was just trying to take everyone into account. Wow, 9000 words hahahahaha. It was soooo worth it. Now how do we shove all this down Vocal's throat? 🤣🤣🤣🤣🤣🤣
I think you have a lot of great ideas here, I haven’t gotten much beyond “AI bad”
Thank you for working so hard on this and I agree with all your suggestions. You’ve put forward your arguments eloquently and persuasively. I’m going to be blunt though - what are people doing on a creative writing platform if they’re using AI? Whatever toss I turn out, at least it’s my own. I am sick of this fakery and cheating. And worse than that is when people steal your work and pass it off as their own. Absolutely wild. Anyway …. Good job DK.
I enjoy these articles and I hope you continue with them. I would like to caution action before it turns into a witch hunt. I agree in my view, Ai used by any creator as a falsehood should have consequences. But we know that people may jump on a band wagon and wish to ‘out’ people with false accusations and once said may cause irreparable damage to said creator. NOW my real concern, for which I have suspicions. Those that still use Ai then rewrite the story and passing it off as original work. This I have noticed, if you read earlier work (known Ai) than latest works, there is a style change. Again not enough to ruin someone if I am incorrect. But it’s there. Thanks again for writing and sharing your manifestos.
Another wonderful article which I completely agree with. Something has to be done or non-paying AI accounts will be all that's left.
First of all, wonderful job putting this all into words. Second, I take real issue with an AI policy developed so long ago. Two years ago, AI wasn't nearly what it is now. They need to sit down, maybe even with top creators, and rework their AI policy. Top to bottom. And realistically, if you're using AI to brainstorm, it won't ever get flagged. If you're using AI to write, of course it will and it should. I'm on the extreme end of this but if we don't want AI to take over all creative endeavors from Vocal to published books on shelves, it has to be a zero-tolerance, no AI allowed situation. Use your head, figure out how to write, and workshop your product. We all did it. For hundreds of years, people have figured out how to write stories. We can still do it ourselves and it's better that way. I recognize it's extreme but in sports, certain pieces of equipment aren't allowed and I think this is very similar. Something has come onto the scene, we've seen what happens and the unfair advantage it creates, and now it should be banned. I don't want to read AI. I can do that myself in ChatGPT in two seconds. I want to read human work, even if it isn't Shakespeare level. The doubt on what I'm reading makes me not want to read at all. I work on a board of directors so I understand the difficulty of managing large-scale finances, but the members and their needs have to come first. Their current AI detection is not working right. People are leaving. That's dollar signs too. A zero AI policy would, I think, eliminate the check on unlabelled AI content and give them the time to develop their own tool like they're already trying to do. But that's my two cents! Really remarkable job you've done with all of this. It doesn't read like 9k words either :) Effortless as always!
I love everything you've said here. The AI problem has been weighing on my mind a lot over the last few weeks. I strongly agree with the elimination or at least heavy restriction of free accounts. I also echo your feelings about the allowance of AI-generated content at all, even with the disclosure label. I'm genuinely curious what kind of true interest and engagement these pieces get. I know if I see that label, I don't read. I'd love to see this simply eliminated. You've raised some excellent questions and proposed reasonable, useful solutions. I'm interested to see Vocal's response, if any at all.