Work In Progress
Our Elements guide is still in progress, and therefore lacks full visual and technical assets. We hope to release them by summer of 2020. Thanks for reading Lingua Franca!
Content generated by an algorithmic system is naturally suspect, since it functions as an imitation of human-level cognitive ability. Whether to scam unsuspecting people, convince the public of falsehoods, or simply turn in a homework assignment, there exist myriad incentives for humans to use AI-created content nefariously. Developers of AI technologies therefore need to take responsibility for the broader repercussions of their systems. One straightforward way to limit the harms of AI-generated content is a verifier: a separate tool, bundled with the AI, that can detect whether a piece of content was generated by it.
Modern society, accelerated by technology, is experiencing a variety of growing pains: a world defined by physical constraints colliding with a society of instantaneous digital capabilities. Children can now take a photograph of a mathematical equation and receive an answer, rather than having to understand the core material. On one hand, this can enable humans to reach greater potential. On the other hand, it directly challenges existing structures of authority. When a college essay can be generated by an algorithm, that essay loses its purpose as a standard of competency. Similarly, if thousands of bots can easily generate disinformation, we lose trust in media and social platforms.
A verifier is a general term for a system that can detect when a piece of content was generated by an AI. Verifiers come in several varieties: some are designed to detect content created by any kind of machine, while others detect the use of one specific model. Crucially, the concept of a verifier runs slightly counter to the basis of artificial intelligence itself. AI, by definition, attempts to mimic human-level capabilities as closely as possible, so the development of verifiers will only become more challenging as the technology improves.
Nevertheless, developers of any AI platform that generates open-domain content (such as text, music, images, etc.) should release verification tools alongside their models as a matter of standard practice. When implementing a given technology as a service, the service provider should include verification as an additional feature or endpoint.
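To make the "verification as an additional endpoint" idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the `GenerationService` class, its in-memory fingerprint set, and the placeholder model call are illustrative stand-ins, not any real provider's API. A production system would persist fingerprints durably and use more robust matching.

```python
import hashlib

class GenerationService:
    """Hypothetical service that pairs a generation endpoint with a
    companion verification endpoint, as the text recommends."""

    def __init__(self):
        # Fingerprints of everything this service has ever generated.
        self._fingerprints = set()

    def generate(self, prompt: str) -> str:
        # Stand-in for a real model call.
        content = f"Generated response to: {prompt}"
        # Record a fingerprint so verification is possible later.
        self._fingerprints.add(hashlib.sha256(content.encode()).hexdigest())
        return content

    def verify(self, content: str) -> bool:
        """Companion endpoint: was this exact content produced by us?"""
        return hashlib.sha256(content.encode()).hexdigest() in self._fingerprints


service = GenerationService()
text = service.generate("write me an essay")
print(service.verify(text))            # True: the service generated it
print(service.verify("a human essay"))  # False: not generated here
```

Note the limitation: exact-hash matching breaks as soon as the content is edited or re-encoded, which is why more robust schemes matter in practice.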
It is often assumed that a verifier must itself be an AI, trained on the outputs of the content generator: the instinct is to 'fight fire with fire' with a symmetric architecture. However, this need not be the case. More often, the verifier should be cleverer than that, operating in the domain of cryptography rather than general intelligence. For example, content created by an image generation API could include a cryptographic signature that is robust to compression, cropping, and other manipulation (e.g. digital watermarking or perceptual hashing).
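The cryptographic half of this idea can be sketched with Python's standard library. The sketch below signs generated content with an HMAC keyed by a provider-held secret; the key name and payload are illustrative assumptions. A plain HMAC is not robust to compression or cropping (any byte change breaks it), so real pipelines would layer it on top of a perceptual hash or watermark; this shows only the signing/verification pattern.

```python
import hmac
import hashlib

# Illustrative only: a real signing key would live in a KMS/HSM,
# never in source code.
SECRET_KEY = b"provider-held-secret"

def sign(content: bytes) -> str:
    """Produce a signature only the provider's key can generate."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """Check the signature with a constant-time comparison."""
    return hmac.compare_digest(sign(content), signature)


payload = b"\x89PNG...illustrative image bytes"
tag = sign(payload)
print(verify(payload, tag))       # True: signature matches
print(verify(b"tampered", tag))   # False: content was altered
```

The design choice here is asymmetry: verification requires no model at all, only key management, which sidesteps the arms race between generators and learned detectors.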
In the case of interactive tools like chatbots, developers should include a special question that allows a user to directly inquire whether they are interfacing with a human or a machine.
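One simple way to implement such a disclosure is to intercept identity questions before the message ever reaches the model, so the answer cannot be overridden by generated text. The pattern below is a minimal sketch with assumed names (`respond`, `model_reply`) and a deliberately crude keyword pattern; a real system would use more careful intent detection.

```python
import re

# Crude illustrative pattern for "am I talking to a machine?" questions.
IDENTITY_QUESTION = re.compile(r"are you (a |an )?(bot|machine|ai|human)",
                               re.IGNORECASE)

def respond(user_message: str) -> str:
    """Handle one chatbot turn, disclosing machine identity when asked.

    The check runs before any model call, so the disclosure is guaranteed
    regardless of what the model might otherwise generate."""
    if IDENTITY_QUESTION.search(user_message):
        return "I am an automated assistant, not a human."
    return model_reply(user_message)

def model_reply(user_message: str) -> str:
    # Stand-in for the real chatbot model.
    return f"(model response to: {user_message!r})"


print(respond("Are you a bot?"))
# I am an automated assistant, not a human.
```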
- OpenAI GPT-2 Detector by HuggingFace
Photomath is one prominent example of a tool intended to solve math problems from a photograph. It is worth noting that Photomath does actually attempt to help a student understand the core material with a step-by-step walkthrough of the solution. ↩︎