New tools driven by artificial intelligence (AI) have been grabbing headlines over the past several months. The gist of these tools is that, in response to specific prompts, they can “create” content (whether text, imagery or something else) much faster than a human ever could. Once they’ve been “trained” on extensive datasets, these tools can essentially predict what a user wants, often with stunning accuracy.
With the right set of queries, chatbots such as ChatGPT can write entire articles about specific topics in mere seconds. AI-driven image generators can instantaneously produce illustrations to represent abstract topics. Still other tools can synthesize video and audio content from the “raw material” of text and images.
This obviously has massive implications for creative fields, and in particular for media organizations like CoinDesk. We’ve been researching AI tools for the past few months, while simultaneously observing how other media companies have been using AI. We want to empower our staff to take advantage of these tools to work more effectively and efficiently, but with a process that protects our readers from the well-documented problems that can arise with AI content and safeguards the rights of the original creators whose work the generative content is based on.
There are several use cases for AI in the process of creating content. This article deals with the main ones that are relevant to CoinDesk’s content team. It does not cover every use case, and it does not address workflows outside the content-generation process.
Generative text in articles
Current AI chatbots can create text from queries very quickly. Users can also customize the text with adjustments to the query — complexity, style, and overall length can all be specified.
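As a rough illustration (and not a description of CoinDesk’s own workflow), the sketch below shows how those adjustments might be expressed when requesting text from a chat model programmatically. It assumes the openai Python package (pre-1.0 interface), an API key in an environment variable, and a purely illustrative prompt; the specific model and wording are assumptions, not recommendations.

```python
# A minimal sketch, assuming the `openai` Python package (pre-1.0 interface)
# and an API key in the OPENAI_API_KEY environment variable.
# The prompt and model choice are illustrative only.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Complexity, style and overall length are all specified directly in the query.
prompt = (
    "Explain what a blockchain validator does. "
    "Write for a general audience in a neutral, explanatory tone, "
    "and keep it under 150 words."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

print(response["choices"][0]["message"]["content"])
```

Tightening or loosening those instructions (a different tone, a longer word limit, a more technical audience) changes the generated text accordingly, which is the kind of customization described above.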
However, an AI cannot contact sources or reliably triage fast-breaking information. While it performs some tasks extremely well, AI lacks the experience, judgment and capabilities of a trained journalist.
AI also makes mistakes, sometimes serious ones. Generative tools have been known to “hallucinate” incorrect facts and state them with complete confidence. They have occasionally been caught plagiarizing entire passages from source material. And even when the generated text is both original and factually correct, it can still feel bland or soulless.
At the same time, an AI can synthesize, summarize and format information about a subject far faster than a human ever could. AI can almost instantaneously create detailed writing on a specific subject that can then be fact-checked and edited. This has the potential to be particularly useful for explanatory content.