New tools driven by artificial intelligence (AI) have been grabbing headlines over the past several months. The gist of these tools is that, in response to specific prompts, they can “create” content (whether text, imagery or something else) far faster than a human ever could. Once they’ve been “trained” on extensive datasets, these tools can essentially predict what a user wants, often with stunning accuracy.
With the right set of queries, chatbots such as ChatGPT can write entire articles about specific topics in mere seconds. AI-driven image generators can instantaneously produce illustrations to represent abstract topics. Still other tools can synthesize video and audio content from the “raw material” of text and images.
This obviously has massive implications for creative fields, and in particular media organizations like CoinDesk. We’ve been researching AI tools for the past few months, while simultaneously observing how other media companies have been using AI. We want to empower our staff to take advantage of these tools to work more effectively and efficiently, but with a process that safeguards our readers from the well-documented problems that can arise with AI content – and that protects the rights of the creators whose original work generative tools draw on.
There are several use cases for AI in the process of creating content. This article deals with the main ones that are relevant to CoinDesk’s content team. It does not cover every use case, and does not speak to workflow outside of the process of content generation.
Generative text in articles
Current AI chatbots can create text from queries very quickly. Users can also customize the text with adjustments to the query — complexity, style, and overall length can all be specified.
However, an AI cannot contact sources or triage fast-breaking information reliably. While it performs some tasks extremely well, AI lacks the experience, judgment and capabilities of a trained journalist.
AI also makes mistakes, sometimes serious ones. Generative tools have been known to “hallucinate” incorrect facts and state them with confidence that they’re correct. They have occasionally been caught plagiarizing entire passages from source material. And even when the generated text is both original and factually correct, it can still feel bland or soulless.
At the same time, an AI can synthesize, summarize and format information about a subject far faster than a human ever could. AI can almost instantaneously create detailed writing on a specific subject that can then be fact-checked and edited. This has the potential to be particularly useful for explanatory content.
Given its limitations and the potential pitfalls, the writing of an AI should be seen as an early draft from an inexperienced writer. In more illustrative terms, an AI tool is comparable to an intern who can write really fast. The analogy is apt: Typically, interns need a great deal of supervision in their work. They are often unfamiliar with the area they’re writing about and the audience they’re writing for, occasionally leading to serious errors. The editor assigned to their work needs to edit their work carefully, check the underlying facts and help tailor the article to the audience.
However, with the right editing process, the work of an intern can be made publishable relatively quickly, especially if the intern has command of the English language (something AI excels at). Similarly, with the right safeguards in place that both prioritize a robust editing process and target the specific pitfalls of AI, we believe that sometimes using generative text in articles can help writers and editors publish more information faster than a purely human-driven process.
With that in mind, CoinDesk will allow generative text to be used in some articles, subject to the following rules. The generative text must be:
Run through plagiarism-detection software
Checked to confirm its sources are reliable (the tool must be capable of citing its sources)
Carefully fact-checked by the writer and editor, including quotations
Edited with an eye toward adding the “human” element
Disclosed. The fact that AI contributed to the article must be clear to the reader.
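As an illustration of the first rule in the list above, a naive plagiarism screen can be sketched as word n-gram overlap between generated text and a source passage. This is only a toy sketch to show the idea – a real workflow would use a commercial detection service, and the function names and threshold here are our own assumptions, not any particular tool’s API:

```python
def ngrams(text: str, n: int = 5) -> set:
    """Return the set of lowercase word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def overlap_score(generated: str, source: str, n: int = 5) -> float:
    """Fraction of the generated text's n-grams that also appear in the source."""
    gen = ngrams(generated, n)
    src = ngrams(source, n)
    if not gen:
        return 0.0
    return len(gen & src) / len(gen)


def needs_review(generated: str, source: str, threshold: float = 0.2) -> bool:
    """Flag generated copy for human review above a (hypothetical) overlap threshold."""
    return overlap_score(generated, source) > threshold
```

A verbatim copy of a source passage scores 1.0 and is flagged; genuinely original phrasing shares few five-word sequences with the source and passes. Commercial detectors are far more sophisticated (they handle paraphrase and search many sources), but the underlying signal is similar.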
Given these requirements and the inherent limitations of AI with respect to the primary ingredients of journalism (e.g., talking to sources), the use cases for generative text are few. However, we see an opportunity for AI to assist in explanatory content, such as this article. In every case where generative text is used in the body of an article – whether in whole or in part – the AI’s contribution will be made clear through both a disclosure at the bottom of the article and the AI’s byline: CoinDesk Bot.
Generative images
CoinDesk will immediately discontinue the use of generative images in content. This is due to pending litigation around the use of proprietary imagery as “training” material for various AI-driven image generators. We might make an exception when the point of an article is to discuss generative images and the images are used in a way that constitutes fair use, but these decisions will be made on a case-by-case basis.
Using a generative image tool to help “inspire” a work of art created by a human is generally OK (this is akin to doodling on scrap paper), with the caveat that the human-created image should not be a de facto copy of the AI-generated image.
AI-generated voice
AI tools can generate or use human-sounding voices to read copy, effectively turning articles into audio clips or podcasts. Though CoinDesk doesn’t currently employ these tools, we see the practice as an evolution of tools that already exist for the visually impaired. If possible, the use of an AI voice generator will be disclosed in the accompanying show notes.
Social copy
Social copy typically functions as a short summary of an article, crafted for a specific platform. Because of its short length, social copy is relatively easy to fact-check and edit, and some AI text tools may be adept at crafting text in the style of specific platforms. In addition, there is less expectation among social audiences that the text accompanying a linked story is original.
For these reasons, CoinDesk allows AI-generated social copy as long as the person preparing the post edits and fact-checks it (which is standard practice). For the same reasons, we don’t think disclosure is necessary (and it would lead to some very clunky tweets). As with articles, using generative images in social posts is forbidden.
Headlines
Like social copy, headlines are quickly fact-checked and edited. Because editors will always be directing the process, we view AI-written headlines as suggestions, and as such they are allowed. Disclosure isn’t necessary because this process does not add any new information, and editors will always check the headlines for accuracy and style. The same applies to subheadings and short descriptions.
Assistance with research
AI may sometimes be able to assist in summarizing long documents such as court filings, research papers and press releases. As long as no part of the generated text is copied into a published article, this is generally allowed with no disclosure needed, with two important caveats:
Journalists should always be skeptical of the facts presented and how they’re prioritized (i.e., any fact that forms the basis of subsequent reporting must be verified).
An AI bot may miss crucial information. Finding valuable information “nuggets” in a court document, for example, is something best left to humans.
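To make the first caveat concrete, one simple automated sanity check – a sketch only, never a substitute for human verification, and the function name is our own invention – is to confirm that every number quoted in an AI summary actually appears somewhere in the source document:

```python
import re


def unverified_numbers(summary: str, source: str) -> list:
    """Return numbers cited in the summary that never appear in the source.

    A non-empty result means the summary may have hallucinated a figure,
    and a human must check it before any reporting is based on it.
    """
    number = re.compile(r'\d[\d,.]*')  # digits, optionally with commas/decimals
    source_numbers = set(number.findall(source))
    return [n for n in number.findall(summary) if n not in source_numbers]
```

A check like this catches only one narrow class of hallucination (invented figures); misattributed quotes, invented names and skewed emphasis still require a human read of the underlying document.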
AI-generated story ideas
Any ideas generated by an AI will inherently need to be vetted and researched by the reporter or editor, so this is allowed. Unless actual text generated by the AI ends up in the final article, it’s not required to disclose that the idea was originally suggested via AI (although the author still may want to do so).
These are the rules of the road for CoinDesk as we travel forward into an AI-driven future. That road may change direction suddenly, expand to a multi-lane divided highway or perhaps even come to a dead end, so we expect these rules to evolve in the coming months and years. Regardless, we’re determined to tread into this new frontier, but to tread carefully. We want these rules to empower our content team to work smarter, using AI for the very specific tasks that machines are best at, so humans can focus on what they’re best at: journalism.