TL;DR

As AI-driven personas become more prevalent online, it’s crucial to establish mechanisms to verify the authenticity of online personas and content. This post aims to introduce, at a high level, three such potential mechanisms. Addressing these areas can enhance online discourse and combat misinformation, although challenges remain in creating platform-agnostic solutions.

Scoping the Problem

In my last post I talked about why I think the internet is going to move from an ‘anonymous first’ place of expression to an ‘identity focused’ one. That change is being driven primarily by the fact that we’re entering an era where more and more personas on the internet could be AI-powered bots rather than real people. With this change it is reasonable to want a verifiable way to determine whether you are communicating with a human or a program. Not only should we want to verify the authenticity of any persona we’re engaging with, but we should also want to verify the authenticity of the content they share. In the full picture there are three main areas to solve for: Proof of Identity (PoI), Proof of Humanity (PoH), and Proof of Authenticity (PoA). These areas represent challenges with slightly different scopes that, when addressed together, are key components of a trustworthy internet in the age of Artificial Intelligence.

Note: PoH here is not identical to the PoH blockchain efforts that already exist. While blockchain is one technology that could power such a capability, it isn’t the only one.

Areas to Solve

As mentioned above, there are three key areas to solve for. This post will not lay out full solutions for each, but rather explain what each scope is intended to cover. Each of these deserves a follow-up post going into more detail.

  1. Proof of Identity (PoI) - Refers to a mechanism to cross-reference the identity of any particular persona on the internet against a verifiable source of truth. Examples could include being able to verify that the CNN Threads account is owned and operated by a designated representative of the CNN Media Network, or that the account xgdpx is owned/operated by me (Geoff Pamerleau). A rough sketch of one way such a cross-reference could work follows this list.

    The main goal of PoI is to address impersonation and the spam/disinformation it enables.

  2. Proof of Humanity (PoH) - Refers to a mechanism to assert and prove that a verified identity represents, and is operated by, a single individual who exists in the real world (i.e. is not a bot). Continuing the previous example, the CNN Identity would not pass a Proof of Humanity test, while the xgdpx Threads account should, as there is a 1-to-1 mapping of the xgdpx Identity to Geoff Pamerleau.

    The main goal of PoH is to address spam and disinformation performed by bad actors (both manual and automated).

  3. Proof of Authenticity (PoA) - Refers to a mechanism to confirm or assert the veracity of content shared by personas. This is probably the hardest of the three: holistic solutions are needed to address content in aggregate, and specific solutions may be needed for different content types (images vs. audio vs. video vs. text, etc.). The provenance piece of this problem is sketched after the list.
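
To make PoI slightly more concrete, here is a minimal sketch of one possible verification flow. It assumes the claimed owner publishes an Ed25519 public key at a location it already controls (its website, a DNS record, a press page, etc.) and that the platform account carries an identity claim signed with the matching private key; the claim format and helper names are illustrative assumptions, not a proposed standard.

```python
# Sketch: cross-referencing a persona to an external source of truth by
# verifying a signed identity claim. In practice the public key would be
# fetched from a location the real-world entity controls; it is generated
# inline here only for the demo.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_identity_claim(private_key: Ed25519PrivateKey, claim: str) -> bytes:
    """The account owner signs a human-readable claim about the persona."""
    return private_key.sign(claim.encode())


def verify_identity_claim(public_key: Ed25519PublicKey, claim: str, signature: bytes) -> bool:
    """Anyone can check the claim against the key published by the source of truth."""
    try:
        public_key.verify(signature, claim.encode())
        return True
    except InvalidSignature:
        return False


# Hypothetical usage
owner_key = Ed25519PrivateKey.generate()
claim = "The Threads account @cnn is operated by the CNN Media Network"
signature = sign_identity_claim(owner_key, claim)
print(verify_identity_claim(owner_key.public_key(), claim, signature))  # True
```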

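The same signing primitives hint at one building block for PoA: content provenance. A publishing persona could sign a hash of each piece of content, letting anyone later confirm that the content is unaltered and attributable to that persona. This sketch covers only the integrity/attribution piece; it says nothing about whether the content is factually true, and the helper names are assumptions for illustration.

```python
# Sketch: signing a content hash at publication time and verifying it later.
# Covers tamper-detection and attribution only, not factual accuracy.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_content(private_key: Ed25519PrivateKey, content: bytes) -> bytes:
    """Return a signature over the SHA-256 digest of the content."""
    return private_key.sign(hashlib.sha256(content).digest())


def verify_content(public_key: Ed25519PublicKey, content: bytes, signature: bytes) -> bool:
    """Check that the content still matches what the claimed publisher signed."""
    try:
        public_key.verify(signature, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False


# Hypothetical usage
publisher_key = Ed25519PrivateKey.generate()
article = b"Example article body..."
signature = sign_content(publisher_key, article)
print(verify_content(publisher_key.public_key(), article, signature))              # True
print(verify_content(publisher_key.public_key(), article + b" edited", signature)) # False
```
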
Parting Thoughts

If we can develop platform-agnostic ways of addressing these issues, then we can improve the quality of discourse on the internet and cut out whole classes of bad activity. It isn’t necessary to solve for all three in every case; a satirical post or article, for example, need not pass the ‘Proof of Authenticity’ test, as it isn’t intended to be a ‘factual’ piece of information. I also acknowledge that people or groups already engaging in bad-faith behavior will probably just ignore, or spin conspiracy theories about, these types of technologies if they become commonplace. These mechanisms are meant to help and protect the vast majority of people who DO want to quickly know whether a particular persona or piece of content is factual and/or trustworthy.

Until next time,

-Sy14r (aka Geoff Pamerleau)