This specification defines a data model for an author to express changes to data over time and the means for a verifier to cryptographically verify those changes.
An experimental thought exercise, for now. Potentially, a data structure for cryptographic event logging of data objects such as DID Documents, ActivityPub objects, and other data formats.
In decentralized systems, coordination relies on a shared view of reality to make decisions. This shared reality can include various types of information, such as the balance of a financial account, the current holder of cryptographic material (like a public key), the location of a physical asset, or the sequence of comments in an online forum. When this information is expressed digitally, it is important to understand the order of events that led to the current state. Additionally, it is crucial for systems to be able to share this information in a decentralized way, with guarantees that the event log has not been tampered with over time.
One approach to securing a cryptographic log of information is to establish a cryptographic key to be associated with a particular piece of data and then only trust changes to that information where each change is digitally signed by the controller of the cryptographic key. One challenge with this approach is that the controller might modify the sequence of events at any point, effectively rewriting history, if doing so provides them an advantage in a decentralized system.
To prevent rewriting history, some decentralized systems have turned to centralized solutions to "witness" the log of events that change the underlying data. Centralized solutions tend to create problems with the power dynamics in an ecosystem and, unsurprisingly, centralize previously decentralized solutions.
Other decentralized systems have turned to solutions such as decentralized ledger technologies (also known as blockchains) to share the state of a system. Some of these solutions create their own problems, such as energy use that some view as excessive (proof of work) or an excessive centralization of capital (proof of stake). They also tend to make some governments nervous, slowing adoption due to the political uncertainty associated with the underlying cryptocurrency.
This specification defines a decentralized cryptographic event log format where a verifier depends on the controller of a particular piece of data, as well as external witnesses that they trust, to establish trust in the current state of a particular object in a decentralized system.
A reader of this specification might presume that the authors are not aware of the 30+ years of prior art around time stamping and cryptographic log services, such as Web Ledger, opentimestamps.org, cryptographically signed transaction logs, blockchains and decentralized ledgers, Tahoe-LAFS, did:peer, Trusted DID Web's log, did:dht, IPFS, and other technologies that provide features similar to what this specification provides. We probably need a section highlighting the prior art in this space and what sets this specification apart. Readers of this specification are urged to provide citations to prior art that they feel apply.
This specification satisfies the following design goals:
The following use cases have been identified as ones that this specification is capable of addressing.
A conforming log is any [=byte sequence=] that can be converted to a JSON document that follows the relevant normative requirements in Section [[[#data-model]]].
A conforming [=application specification=] is any specification that follows the relevant normative requirements in Section [[[#application-specifications]]].
A conforming processor is any algorithm realized as software and/or hardware that generates and/or consumes a [=conforming log=] according to the relevant normative statements in Section [[[#algorithms]]]. Conforming processors MUST produce errors when non-conforming documents are consumed.
This section defines the data model for expressing protected events that enable an author to express changes to a specific instance of data over time in a way that is tamper-evident and that can be cryptographically authenticated by verifiers in a decentralized manner.
A cryptographic event log contains a [=list=] of witnessed events. The log's basic structure is shown below:
{ "log": [{ "event": {...}, // the first event in the event log "proof": [...] // a list of witness proofs securing the event }, { "event": {...}, // the second event "proof": [...] // a list of witness proofs securing the second event }, { "event": {...}, // the third event "proof": [...] // a list of witness proofs securing the third event }] }
An [=event log=] MUST conform to the following format:
Property | Description
---|---
`log` | A REQUIRED property whose value is a [=list=] of one or more entries that conform to the [=log entry=] format described below.
`previousLog` | An OPTIONAL property whose value MUST conform to an external reference as defined in Section [[[#external-reference]]]. The value MUST also contain a `proof` property whose value conforms to a [=data integrity proof=] as defined by the [[VC-DATA-INTEGRITY]] specification. [=Conforming processors=] MUST support at least the `ecdsa-jcs-2019` cryptosuite as defined by the [[[VC-DI-ECDSA]]] specification. Event logs MUST have a default maximum size of 10MB, which can be overridden by [=application specification=]s, and SHOULD be at least 1MB in size before a new chunk is created.
To support arbitrarily long change histories, an event log can be split into chunks, with each chunk referring to the one before it via the `previousLog` property.
{ "previousLog" : { // URLs that can be used to retrieve the previous event log "url": [ "https://website.example/log123.cel", "3jxop4cs2lu5ur2...sseqdsp3k262uwy4.onion/log123.cel", "ipfs://QmCQFJGkARStJbTy8w65yBXgyfG2ZBg5TrfB2hPjrDQH3R" ], // The media type associated with the previous log "mediaType": "application/cel", // a cryptographic digest of the previous log (sha2-256 base64-url-nopad) "digestMultibase": "uEiwLZcLK9EBguM5WZlbAgfXBDuQAkoYyQ6YVtUmER8pN24" "proof": [...] // a list of witness proofs securing the previous event log }, "log": [{ "event": {...}, // the first event in the event log "proof": [...] // a list of witness proofs securing the event }, { "event": {...}, // the second event "proof": [...] // a list of witness proofs securing the second event }, { "event": {...}, // the third event "proof": [...] // a list of witness proofs securing the third event }] }
An external reference points to data outside of the event log in a way that is cryptographically verifiable. The reference can provide a list of URLs where the data can be retrieved, a media type to use when performing the data retrieval, and a cryptographic hash of the data.
Property | Description
---|---
`url` | An OPTIONAL property whose value MUST be a [=list=] of one or more URLs that can be used to retrieve the data being secured. If no `url` value is specified, the mechanism used to retrieve the data is application-specific.
`mediaType` | An OPTIONAL property whose value MUST be a media type as defined by the [[[RFC6838]]] specification.
`digestMultibase` | A REQUIRED property whose value MUST be a Multibase-encoded (base64-url-nopad) Multihash (sha2-256) value.
{
  // URLs that can be used to retrieve the data
  "url": [
    "https://website.example/file.dat",
    "cs2lu5ur23jxop4...p3k262uwy4sseqds.onion/file.dat",
    "ipfs://QmJbTy8w65yBXgyCQFJGkARStfB2hPjrDQH3RG2ZBg5Trf"
  ],
  // The media type of the file
  "mediaType": "application/octet-stream",
  // a cryptographic digest of the file (sha2-256 base64-url-nopad)
  "digestMultibase": "uEQ6YVtUmER8pN24iwLZcLK9EBguM5WZlbAgfXBDuQAkoYy"
}
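A minimal, non-normative TypeScript sketch of constructing such a reference for a local file is shown below, assuming the sha2-256 multihash and base64-url-nopad multibase encoding used above; the file path and URLs are illustrative.

```typescript
import { createHash } from 'node:crypto';
import { readFile } from 'node:fs/promises';

async function createExternalReference(
    path: string, url: string[], mediaType: string) {
  const bytes = await readFile(path);
  const hash = createHash('sha256').update(bytes).digest();
  // 0x12 = sha2-256 multihash code, 0x20 = digest length (32 bytes)
  const multihash = Buffer.concat([Buffer.from([0x12, 0x20]), hash]);
  return {
    url,
    mediaType,
    digestMultibase: 'u' + multihash.toString('base64url')
  };
}

// e.g., createExternalReference('file.dat',
//   ['https://website.example/file.dat'], 'application/octet-stream');
```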
A cryptographic event log entry is used to establish an event that can be independently authenticated by a verifier. The [=log entry=] MUST be composed of at least a single `event` and one or more associated `proof` values that can be used to verify the validity of the event.
Property | Description
---|---
`operation` | A REQUIRED property that captures an operation that was observed during the event. The `operation` value MUST include a `type` property and either a `data` property, which contains the protected data directly, or a `dataReference` property, which contains an external reference to the protected data as defined in Section [[[#external-reference]]].
`previousEvent` | If `type` is not set to `create`, this property is REQUIRED and its [=string=] value MUST be a Multibase-encoded (base64-url-nopad) Multihash (sha2-256) of the event immediately preceding this event.
The example below shows the first [=log entry=] containing a `create` operation:
{ "log": [{ "event": { // the cryptographic hash of the previous event "previousEvent": "uEBguM5WZlbAgfXBDuQiAkoYyQ6YVtUmER8pN24wLZcLK9E", "operation": { // the type of operation observed by the event "type": "create", // the data associated with the event (in JSON format) "data": { "name": "Hello World!", // one or more proofs that secure the integrity of the event above "proof": { "type": "DataIntegrityProof", "cryptosuite": "ecdsa-jcs-2019", "created": "2024-11-29T13:56:28Z", // the cryptographic authority for the data is application specific // in this example, it is established by the proof in the create operation "verificationMethod": "https://website.example/crypto#key-1", "proofPurpose": "assertionMethod", "proofValue": "zq6PrUMCtqY5obCSsrQxuFJd...wVWgHYrXxoV93gBHqGDBtQLPFxpZxz" } } } } }] }
The example below shows the second [=log entry=] containing an `update` operation that refers to the previous example via the `previousEvent` property:
{ "log": [{ ... // first event log entry }, { "event": { // the cryptographic hash of the previous event "previousEvent": "uEBguM5WZlbAgfXBDuQiAkoYyQ6YVtUmER8pN24wLZcLK9E", "operation": { // the type of operation is an "update" "type": "update", "data": { // the "name" value is changed in the update "name": "Updated Hello World!", "proof": { "type": "DataIntegrityProof", "cryptosuite": "ecdsa-jcs-2019", "created": "2024-11-30T11:29:11Z", "verificationMethod": "https://website.example/crypto#key-1", "proofPurpose": "assertionMethod", "proofValue": "zqY5obCSsrQxuFJdq6PrUMCt...93gBHqGDBtQLPFxpZxzwVWgHYrXxoV" } } } } }] }
Some data are too large to include in the event log itself. For these cases, it is possible to refer to data outside of the event log and digitally sign a cryptographic hash of the external data. It is also possible to provide multiple URLs for a verifier to use when retrieving the external data.
{ "log": [{ "event": { "operation": { "type": "create", // a reference to the data associated with the event "dataReference": { // URLs that can be used to retrieve the data "url": [ "https://website.example/file.dat", "3jxop4cs2lu5ur2...sseqdsp3k262uwy4.onion/file.dat", "ipfs://QmTy8w65yBXgyfG2ZBg5TrfB2hPjrDQH3RCQFJGkARStJb" ], // The media type of the associated data "mediaType": "application/octet-stream", // a cryptographic digest of the data (sha2-256 base64-url-nopad) "digestMultibase": "uEiAkoYyQ6YVtUmER8pN24wLZcLK9EBguM5WZlbAgfXBDuQ" } } } }] }
An event witness is a service that can attest to the existence of data at a particular point in time. Verifiers trust these services to make such attestations by digitally signing a cryptographic hash that is provided to them; the witness never sees the data itself, but confirms its existence at the time of signing.
The [=event witness=] performs their function by providing a [=data integrity proof=] for a particular event digest. The log controller then appends the proof onto the array of `proof` values.
{ "log": [{ "event": { "operation": { "type": "create", // a reference to the data associated with the event "dataReference": { // URLs that can be used to retrieve the data "url": [ "https://website.example/file.dat", "3jxop4cs2lu5ur2...sseqdsp3k262uwy4.onion/file.dat", "ipfs://QmTy8w65yBXgyfG2ZBg5TrfB2hPjrDQH3RCQFJGkARStJb" ], // The media type of the associated data "mediaType": "application/octet-stream", // a cryptographic digest of the data (sha2-256 base64-url-nopad) "digestMultibase": "uEiAkoYyQ6YVtUmER8pN24wLZcLK9EBguM5WZlbAgfXBDuQ" } } }, // one or more proofs that witness the event above "proof": [{ "type": "DataIntegrityProof", "cryptosuite": "ecdsa-jcs-2019", "created": "2024-11-29T13:56:45Z", "verificationMethod": "https://witness.example/attestation#key-P8", "proofPurpose": "assertionMethod", "proofValue": "zJdq6PrUMCtqY5obCSsrQxuF...tQLPFxpZxzwVWgHYrXxoV93gBHqGDB" }] }] }
Event logs that contain all of the data necessary to reconstruct the current state of the data can be verbose, especially when many event witnesses are used. For these use cases, a compact format is provided that can be further compressed into a minimal binary CBOR stream. A minimum viable [=cryptographic event log=] is shown below:
{ "log": [{ "event": { "operation": { "type": "create", "dataReference": "uEiBfhmMyElIQPrulFu-5ETYVLgzyvoPsmxTMpEds7iQPBw" } }, "proof": [ "uEiAJEdA64FgSCB6yPqXIRU_x8hg5wcEwwvT-EfHxm44lCA", "uEiD4MvIoKbDn7yHUl_G4ivWNtQZ6v8p2tfDvcRBJ2NQTVw", "uEiDlAVvCcKUxn9XeLMOzDoJyLevPfX91pyi1Ry5l92pHsw" ] }, { "event": { "previousEvent": "uEcusg0U2pL9sdsusEuMCdQiCLcgmToC-2b9qCWNHZt3ouO", "operation": { "type": "update", "dataReference": "uEiCLcgmToC-2b9qCWNHZt3ouOcusg0U2pL9sdsusEuMCdQ" } }, "proof": [ "uEiCsH8n2phXA-tYy_bcTpuAMkWEDyiZLZyqv1PwehHCqsA", "uEiDXyzlypk757_ZTKxRZINMUDZQPOInYcj6k_vj3ro--bw", "uEiCqA7hqFj5THapravk_wh_ViKu6h8BC0uqhKoWTERLBZA" ] }] }
The event log above (651 bytes) can be compressed to CBOR (350 bytes, a reduction of roughly 46%), where each [=log entry=] has an overhead of roughly 19 bytes (12% overhead). The CBOR encoding is shown below in annotated form:
A1                       # map(1)
   20                    # log
   82                    # array(2)
      A2                 # map(2)
         21              # event
         A1              # map(1)
            22           # operation
            A2           # map(2)
               23        # type
               38 63     # create
               24        # dataReference
               58 22     # bytes(34)
                  12205F8663321252103EBBA516EFB91136152E0CF2BE83EC9B14CCA4476CEE240F07
         26              # proof
         83              # array(3)
            58 22        # bytes(34)
               12200911D03AE05812081EB23EA5C8454FF1F21839C1C130C2F4FE11F1F19B8E2508
            58 22        # bytes(34)
               1220F832F22829B0E7EF21D497F1B88AF58DB5067ABFCA76B5F0EF711049D8D41357
            58 22        # bytes(34)
               1220E5015BC270A5319FD5DE2CC3B30E82722DEBCF7D7F75A728B5472E65F76A47B3
      A2                 # map(2)
         21              # event
         A2              # map(2)
            25           # previousEvent
            58 22        # bytes(34)
               11CBAC834536A4BF6C76CBAC12E3027508822DC8264E80BED9BF6A09634766DDE8B8
            22           # operation
            A2           # map(2)
               23        # type
               38 64     # update
               24        # dataReference
               58 22     # bytes(34)
                  12208B720993A02FB66FDA8258D1D9B77A2E39CBAC834536A4BF6C76CBAC12E30275
         26              # proof
         83              # array(3)
            58 22        # bytes(34)
               1220AC1FC9F6A615C0FAD632FDB713A6E00C916103CA264B672AAFD4FC1E8470AAB0
            58 22        # bytes(34)
               1220D7CB3972A64EF9EFF6532B145920D3140D940F3889D8723EA4FEF8F7AE8FBE6F
            58 22        # bytes(34)
               1220AA03B86A163E531DAA6B6AF93FC21FD588ABBA87C042D2EAA12A85931112C164
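The encoding above implies a simple bidirectional key mapping: string keys map to small negative integers (0x20 encodes -1 for `log`, 0x21 encodes -2 for `event`, and so on) and well-known values like `create` and `update` map to -100 and -101. The TypeScript sketch below applies such a mapping; the integer assignments are read off of the example bytes and are illustrative, not normative. CBOR encoding itself is delegated to a library such as the `cbor` npm package (assumed available), and the conversion of multibase strings to raw bytes is omitted for brevity.

```typescript
import { encode } from 'cbor';

const KEYS: [string, number][] = [
  ['log', -1], ['event', -2], ['operation', -3], ['type', -4],
  ['dataReference', -5], ['previousEvent', -6], ['proof', -7]
];
const VALUES: [string, number][] = [['create', -100], ['update', -101]];
const keyToInt = new Map(KEYS);
const valueToInt = new Map(VALUES);

// Recursively replace known keys and values with their integer codes.
function compact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(compact);
  if (value !== null && typeof value === 'object') {
    return new Map(Object.entries(value as object).map(
      ([k, v]) => [keyToInt.get(k) ?? k, compact(v)]));
  }
  if (typeof value === 'string') return valueToInt.get(value) ?? value;
  return value;
}

function toCompactCbor(eventLog: object): Buffer {
  return encode(compact(eventLog));
}
```

Decoding reverses the same table, which is why the number of reserved integer codes matters for round-tripping.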
The data model described in this specification can be serialized as JSON or CBOR.
If we do not provide a CBOR serialization, some subset of the developer
community will complain. The only argument for a CBOR serialization is
ensuring a byte-optimized storage format. We could make CBOR the default
serialization, but at the cost of alienating developers that don't work in
CBOR (which is most of them). The only time size restrictions come into
play is with large event logs (or large binaries in event logs), which are a
possibility. We're going to start out with JSON, see how far it gets us, and
see if the pro-CBOR community would be willing to trade CPU cycles to
convert between JSON and CBOR (ensuring stable round-tripping). The number
of fields we'd have to pick CBOR values for is minimal.
The default representation for a [=cryptographic event log=] is JSON. That
might change if we get enough developer feedback to use CBOR. The alternative
is to provide a bi-directional mapping to CBOR which reduces storage
requirements.
An approach for a bi-directional mapping is provided in Section
[[[#minimizing-event-logs]]].
The algorithms described in this section outline the procedures used to ensure the integrity and security of the event log. These algorithms provide the means for securely recording, verifying, and sharing changes to data in a decentralized system. By leveraging cryptographic techniques, the algorithms ensure that the event log is tamper-evident and verifiable, allowing parties to independently confirm the authenticity of the data. The following subsections detail the specific algorithms used for creating event entries, verifying their integrity, and ensuring consistency across systems.
The following algorithm defines the process for creating an event log.
The following algorithm defines the process for creating an event log entry. The event log serves as a record of changes to data within a system, capturing key information such as the type of event, the associated cryptographic hash of the protected data, and a proof that secures both. The algorithm ensures that each event is securely logged and that the integrity of the data is maintained over time. By following this process, systems can reliably document and share events, providing verifiable evidence of changes while maintaining decentralization and security.
Required inputs are an [=event log=] ([=map=] |inputEventLog|), a [=log entry=] ([=map=] |inputEvent|), and a set of options ([=map=] |options|). An [=event log=] ([=map=]), or an error, is produced as output.
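A minimal, non-normative TypeScript sketch of the shape such an algorithm might take is shown below. The digest helper is the `previousEventDigest` function sketched earlier, the byte-size approximation is illustrative, and the entry's `proof` array starts empty because witness proofs are appended by the witnessing algorithm that follows.

```typescript
declare function previousEventDigest(event: object): string; // sketched earlier

interface LogEntry { event: Record<string, unknown>; proof: unknown[]; }
interface EventLog { log: LogEntry[]; }

function createLogEntry(inputEventLog: EventLog,
    inputEvent: Record<string, unknown>,
    options: { maxLogSize?: number } = {}): EventLog {
  const entries = inputEventLog.log;
  if (entries.length > 0) {
    // chain the new event to the digest of the event that precedes it
    inputEvent.previousEvent =
      previousEventDigest(entries[entries.length - 1].event);
  }
  const updated = { log: [...entries, { event: inputEvent, proof: [] }] };
  // default maximum log size is 10MB; application specifications can
  // override this, at which point a new chunk would be started
  const maxLogSize = options.maxLogSize ?? 10 * 1024 * 1024;
  if (JSON.stringify(updated).length > maxLogSize) { // approximates bytes
    throw new Error('Event log exceeds maximum size; start a new chunk.');
  }
  return updated;
}
```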
The following algorithm defines the process for witnessing an event log entry. A witness attests to an event log entry by digitally signing a cryptographic hash instead of viewing the data directly. This provides privacy for the author and eliminates liability for the witness. Effectively, the signed value states: "I witness cryptographic hash X at point in time Y.", which establishes that the data existed at that particular point in time. Witness services are expected to be external to the system requesting the witnessing, such as an HTTP endpoint that accepts a cryptographic hash as input.
This specification should probably define the witness HTTP endpoint function call. The endpoint would take in a `digestMultibase` value and produce a valid [=data integrity proof=].
Required inputs are a [=log entry=] ([=map=] |inputEvent|) and a set of options ([=map=] |options|). A [=log entry=] ([=map=]), or an error, is produced as output.
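The TypeScript sketch below illustrates the witness interaction the note above contemplates: the controller sends only a digest, and the witness returns a [=data integrity proof=] over it. The endpoint path, request shape, and `witnesses` option are all hypothetical, and the digest helper is the one sketched earlier.

```typescript
declare function previousEventDigest(event: object): string; // sketched earlier

async function requestWitnessProof(witnessUrl: string,
    digestMultibase: string): Promise<unknown> {
  const response = await fetch(witnessUrl, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ digestMultibase })
  });
  if (!response.ok) throw new Error(`witness returned ${response.status}`);
  return response.json(); // expected to be a DataIntegrityProof
}

// Gathers a proof from each configured witness and appends them to the
// entry's proof array.
async function witnessLogEntry(
    inputEvent: { event: object; proof: unknown[] },
    options: { witnesses: string[] }) {
  const digest = previousEventDigest(inputEvent.event);
  const proofs = await Promise.all(
    options.witnesses.map(url => requestWitnessProof(url, digest)));
  inputEvent.proof.push(...proofs);
  return inputEvent;
}
```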
This section outlines the algorithm for updating an existing event log entry. As events occur and data changes, it is essential to ensure that the event log remains accurate, up-to-date, and secure. The update process involves appending new event entries while preserving the integrity of previous entries through cryptographic validation. This algorithm ensures that each update is properly recorded and can be verified independently, allowing systems to maintain an up-to-date, tamper-evident record of all changes.
Different applications will have different ways in which they want to construct how the data entries relate to one another in a history of events. Some applications will prefer replacement semantics, where the entire object is provided in a self-contained manner for every change. Other applications will prefer patching semantics, where a patch set is provided as an update instead of the entire object. The state machine that applies event entries to the data object over time is specified by the [=application specification=] and not this specification.
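The TypeScript sketch below contrasts the two styles. With replacement semantics, each event's data is the complete object; with patching semantics, each event's data after the first is a partial update, modeled here as a shallow merge for brevity (a real [=application specification=] might instead choose JSON Patch [RFC6902] or similar).

```typescript
interface Operation { type: string; data: Record<string, unknown>; }

function replayWithReplacement(ops: Operation[]): Record<string, unknown> {
  // the most recent event carries the entire current state
  return ops[ops.length - 1]?.data ?? {};
}

function replayWithPatching(ops: Operation[]): Record<string, unknown> {
  // each event's data is applied on top of the accumulated state
  return ops.reduce((state, op) => ({ ...state, ...op.data }), {});
}
```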
This section describes the algorithm for deactivating an event log entry when it is no longer relevant or needed. Deactivation marks the point at which an event log is no longer active, ensuring that no further changes can be made to the entry while maintaining the integrity of the data already recorded. The algorithm ensures that the deactivation process is secure, verifiable, and irreversible, allowing systems to formally close out event logs without compromising the authenticity of prior entries.
This section outlines the algorithm for verifying the integrity and authenticity of an event log. The process ensures that the event log has not been tampered with and that each entry is consistent with the expected cryptographic proofs. By using cryptographic signatures and validation techniques, the algorithm allows verifiers to independently confirm that the recorded events are genuine and have not been altered over time. This verification process is essential for maintaining trust and accountability in decentralized systems, where participants rely on records that remain accurate over time.
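A minimal, non-normative TypeScript sketch of the verification walk is shown below: each `previousEvent` digest is checked against the preceding event, and at least one trusted witness proof must verify for each entry. The `verifyProof` option stands in for a Data Integrity proof verifier and the digest helper is the one sketched earlier; both are assumptions.

```typescript
declare function previousEventDigest(event: object): string; // sketched earlier

interface LogEntry { event: Record<string, unknown>; proof: unknown[]; }

async function verifyEventLog(eventLog: { log: LogEntry[] },
    options: { verifyProof: (e: object, p: unknown) => Promise<boolean> }
): Promise<boolean> {
  for (let i = 0; i < eventLog.log.length; i++) {
    const { event, proof } = eventLog.log[i];
    // every event after the first must chain to its predecessor
    if (i > 0 && event.previousEvent !==
        previousEventDigest(eventLog.log[i - 1].event)) {
      return false;
    }
    // at least one witness proof trusted by the verifier must verify
    const results = await Promise.all(
      proof.map(p => options.verifyProof(event, p)));
    if (!results.some(Boolean)) return false;
  }
  return true;
}
```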
This section contains sample [=application specification=]s and how different ecosystems would use this specification to achieve cryptographic event logs for the applications in their ecosystem.
A cryptographic event log application specification is a document that builds upon this specification and MUST define at least three algorithms:
[=Application specifications=] MAY also define other things such as how to discover data from a `digestMultibase` value, protocols for interacting with witnesses, and alternative proof and serialization mechanisms.
The example in this section shows how changes to a DID Document could be performed in a way that is fully decentralized without requiring a blockchain. Witnesses are selected based on which ones verifiers might trust the most, without introducing a cold start problem: a network can start with a single witness and grow from there, since a verifier need only trust a single witness for a change entry to be seen as valid.
The application-specific aspects of this particular extension require the initial DID Document to be hashed without the value of the DID being placed in the DID Document. Once the initial `data` value is hashed, that hash becomes the DID for all subsequent DID Documents, which creates a stable identifier for the controller of the DID to use.
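A non-normative TypeScript sketch of that derivation is shown below: the initial DID Document is digested before the `proof` property is added and with an empty method-specific identifier, and the digest becomes the stable DID. The base58btc multibase ('z') encoding is inferred from the `did:example:zQm...` identifiers in the example that follows; `canonicalize` and `bs58` are assumed npm packages.

```typescript
import { createHash } from 'node:crypto';
import canonicalize from 'canonicalize';
import bs58 from 'bs58';

function deriveDid(initialDocument: object): string {
  // hash the initial document exactly as published, minus the proof
  // property and with the DID value left out of the document
  const bytes = Buffer.from(canonicalize(initialDocument) as string, 'utf8');
  const hash = createHash('sha256').update(bytes).digest();
  // sha2-256 multihash (0x12, 0x20) encoded as base58btc multibase ('z')
  const multihash = Buffer.concat([Buffer.from([0x12, 0x20]), hash]);
  return `did:example:z${bs58.encode(multihash)}`;
}
```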
Each event log entry needs to be signed by a verification method that is trusted to perform assertions, which is listed in the initial log entry. Future log entries can rotate cryptographic keys by using a key from the previous log entry that is trusted to make assertions; subsequent changes then use the newly rotated key. Witness signatures are provided by two nations, the red nation and the blue nation, who are presumed to be mutually distrusting of one another but both provide witnessing services for their citizens. This demonstrates that mutually distrusting nations can provide signatures over each other's citizens' data without requiring prior authorization. If nations cannot bring themselves to run such services, private industry will likely operate witnessing services instead.
{ "log": [{ "event": { "operation": { "type": "create", "data": { // the DID is established by cryptographically digesting this object // before the proof property is added "@context": "https://www.w3.org/ns/did/v1.1", "id": "did:example:", "verificationMethod": [{ "id": "#key-1", "type": "Multikey", "controller": "did:example", "publicKeyMultibase": "zDnaerx9CtbPJ1q36T5Ln5wYt3MQYeGRG5ehnPAmxcf5mDZpv" }], "authentication": ["#key-1"], "assertionMethod": ["#key-1"], "capabilityDelegation": [], "capabilityInvocation": [] // this proof establishes the controller of the DID Document "proof": { "type": "DataIntegrityProof", "cryptosuite": "ecdsa-jcs-2019", "created": "2024-11-29T13:56:28Z", "verificationMethod": "did:example:zQmQoeG7u6XBtdXoek5p3aPoTjaSRemHAKrMcY2Hcjpe3jv#key-1", "proofPurpose": "assertionMethod", "proofValue": "z5obCSsrQxuFJdq6PrUMCtqY...93gBHqGDBtQLPFxpZxzwVWgHYrXxoV" } } } }, // these proofs establish the witnesses on the creation of the DID Document "proof": [{ "type": "DataIntegrityProof", "cryptosuite": "ecdsa-jcs-2019", "created": "2024-11-29T13:56:45Z", "verificationMethod": "https://rednation.example/attestations#key-5W", "proofPurpose": "assertionMethod", "proofValue": "zJdq6PrUMCtqY5obCSsrQxuF...tQLPFxpZxzwVWgHYrXxoV93gBHqGDB" }, { "type": "DataIntegrityProof", "cryptosuite": "ecdsa-jcs-2019", "created": "2024-11-29T13:56:49Z", "verificationMethod": "https://bluenation.example/witnesses#key-Z2", "proofPurpose": "assertionMethod", "proofValue": "ztqY5obCSsrQxuFJdq6PrUMC...zwVWgHYrXxoV93gBHqGDBtQLPFxpZx" }] }, { "event": { "previousEvent": "uEkoYyQ6YVtUmER8pN24wLZcLK9EBguM5WZlbAgfXBDuQiA" "operation": { "type": "update", "data": { "@context": "https://www.w3.org/ns/did/v1.1", "id": "did:example:zQmQoeG7u6XBtdXoek5p3aPoTjaSRemHAKrMcY2Hcjpe3jv", "verificationMethod": [{ "id": "did:example:zQmQoeG7u6XBtdXoek5p3aPoTjaSRemHAKrMcY2Hcjpe3jv#key-1", "type": "Multikey", "controller": "did:example:zQmQoeG7u6XBtdXoek5p3aPoTjaSRemHAKrMcY2Hcjpe3jv", "publicKeyMultibase": "zDnaerx9CtbPJ1q36T5Ln5wYt3MQYeGRG5ehnPAmxcf5mDZpv" }, { "id": "did:example:zQmQoeG7u6XBtdXoek5p3aPoTjaSRemHAKrMcY2Hcjpe3jv#key-2", "type": "Multikey", "controller": "did:example:zQmQoeG7u6XBtdXoek5p3aPoTjaSRemHAKrMcY2Hcjpe3jv", "publicKeyMultibase": "z6Mkf5rGMoatrSj1f4CyvuHBeXJELe9RPdzo2PKGNCKVtZxP" }], "authentication": ["did:example:zQmQoeG7u6XBtdXoek5p3aPoTjaSRemHAKrMcY2Hcjpe3jv#key-1"], "assertionMethod": ["did:example:zQmQoeG7u6XBtdXoek5p3aPoTjaSRemHAKrMcY2Hcjpe3jv#key-2"], "capabilityDelegation": ["did:example:zQmQoeG7u6XBtdXoek5p3aPoTjaSRemHAKrMcY2Hcjpe3jv#key-2"], "capabilityInvocation": ["did:example:zQmQoeG7u6XBtdXoek5p3aPoTjaSRemHAKrMcY2Hcjpe3jv#key-2"] // this proof confirms that the controller approves of the DID Document update "proof": { "type": "DataIntegrityProof", "cryptosuite": "ecdsa-jcs-2019", "created": "2024-11-30T17:03:42Z", "verificationMethod": "did:example:zQmQoeG7u6XBtdXoek5p3aPoTjaSRemHAKrMcY2Hcjpe3jv#key-1", "proofPurpose": "assertionMethod", "proofValue": "z5obCSsrQxuFJdq6PrUMCtqY...93gBHqGDBtQLPFxpZxzwVWgHYrXxoV" } } } }, // these proofs establish the witnesses on the DID Document update "proof": [{ "type": "DataIntegrityProof", "cryptosuite": "ecdsa-jcs-2019", "created": "2024-11-30T17:03:49Z", "verificationMethod": "https://rednation.example/attestations#key-5W", "proofPurpose": "assertionMethod", "proofValue": "zJdq6PrUMCtqY5obCSsrQxuF...tQLPFxpZxzwVWgHYrXxoV93gBHqGDB" }, { "type": "DataIntegrityProof", "cryptosuite": "ecdsa-jcs-2019", "created": "2024-11-30T17:03:57Z", 
"verificationMethod": "https://bluenation.example/witnesses#key-Z2", "proofPurpose": "assertionMethod", "proofValue": "ztqY5obCSsrQxuFJdq6PrUMC...zwVWgHYrXxoV93gBHqGDBtQLPFxpZx" }] }] }
The example in this section demonstrates the creation of an ActivityPub Note, which is then edited and updated in a way that can be confirmed by independent verifiers.
The application-specific extension to this specification establishes the cryptographic keys that are trusted to sign cryptographic logs by using the `actor` property (`https://personal.example/mallory`).
The initial Note contains a message that says "I'll be there at 5pm", and the subsequent update changes that message to "I'll be there at 6pm".
Both the initial note and the subsequent update are witnessed by the red and blue social network servers, which are presumed to be run by different organizations. By providing witnessing services, both social networks can support the trustworthiness of events as they occur across the fediverse without the need for centralized time-stamping services for either network.
{ "log": [{ "event": { "operation": { "type": "create", "data": { "@context": "https://www.w3.org/ns/activitystreams", "type": "Create", "id": "https://personal.example/mallory/87374", "actor": "https://personal.example/mallory", "object": { "id": "https://website.example/mallory/note/72", "type": "Note", "attributedTo": "https://personal.example/mallory", "content": "I'll be there at 5pm", "published": "2025-02-10T15:04:55Z", "to": ["https://example.org/~john/"], "cc": ["https://website.example/~erik/followers", "https://www.w3.org/ns/activitystreams#Public"] }, "published": "2025-02-10T15:04:55Z", "to": ["https://example.org/~john/"], "cc": ["https://website.example/~erik/followers", "https://www.w3.org/ns/activitystreams#Public"] "proof": { "type": "DataIntegrityProof", "cryptosuite": "ecdsa-jcs-2019", "created": "2025-02-10T15:04:55Z", "verificationMethod": "https://personal.example/mallory#key-1", "proofPurpose": "assertionMethod", "proofValue": "z5obCSsrQxuFJdq6PrUMCtqY...93gBHqGDBtQLPFxpZxzwVWgHYrXxoV" } }, }, }, "proof": [{ "type": "DataIntegrityProof", "cryptosuite": "ecdsa-jcs-2019", "created": "2025-02-10T15:05:02Z", "verificationMethod": "https://redsocial.example/attestations#key-5W", "proofPurpose": "assertionMethod", "proofValue": "zJdq6PrUMCtqY5obCSsrQxuF...tQLPFxpZxzwVWgHYrXxoV93gBHqGDB" }, { "type": "DataIntegrityProof", "cryptosuite": "ecdsa-jcs-2019", "created": "2025-02-10T15:05:08Z", "verificationMethod": "https://bluesocial.example/witnesses#key-Z2", "proofPurpose": "assertionMethod", "proofValue": "ztqY5obCSsrQxuFJdq6PrUMC...zwVWgHYrXxoV93gBHqGDBtQLPFxpZx" }] }, { "event": { "previousEvent": "uEfXBDuQiAkoYyQ6YVtUmER8pN24wLZcLK9EBguM5WZlbAg" "operation": { "type": "update", "data": { "@context": "https://www.w3.org/ns/activitystreams", "type": "Update", "object": { "id": "https://website.example/mallory/note/72", "content": "I'll be there at 6pm", "published": "2025-02-10T15:07:15Z", } "proof": { "type": "DataIntegrityProof", "cryptosuite": "ecdsa-jcs-2019", "created": "2025-02-10T15:07:15Z", "verificationMethod": "https://personal.example/mallory#key-1", "proofPurpose": "assertionMethod", "proofValue": "z5obCSsrQxuFJdq6PrUMCtqY...93gBHqGDBtQLPFxpZxzwVWgHYrXxoV" } }, }, }, "proof": [{ "type": "DataIntegrityProof", "cryptosuite": "ecdsa-jcs-2019", "created": "2025-02-10T15:07:19Z", "verificationMethod": "https://redsocial.example/attestations#key-5W", "proofPurpose": "assertionMethod", "proofValue": "zJdq6PrUMCtqY5obCSsrQxuF...tQLPFxpZxzwVWgHYrXxoV93gBHqGDB" }, { "type": "DataIntegrityProof", "cryptosuite": "ecdsa-jcs-2019", "created": "2025-02-10T15:07:26Z", "verificationMethod": "https://bluesocial.example/witnesses#key-Z2", "proofPurpose": "assertionMethod", "proofValue": "ztqY5obCSsrQxuFJdq6PrUMC...zwVWgHYrXxoV93gBHqGDBtQLPFxpZx" }] }] }
This specification is designed to address the following threat model.
Threat: The data controller decides to rewrite history to give themselves an advantage in the ecosystem.
Mitigation: Verifiers that have seen the most recent event log, before a fork, will be able to detect forks. Verifiers that have never seen the most recent event log, and have no way of retrieving it, will not be able to detect the fork in history.
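The TypeScript sketch below illustrates this mitigation: a verifier caches the digest of the most recent event it has accepted and, on retrieving a newer log, requires that event to still appear in the history. The digest helper is the one sketched earlier; the presence check is a simplification of a full prefix comparison.

```typescript
declare function previousEventDigest(event: object): string; // sketched earlier

interface LogEntry { event: Record<string, unknown>; proof: unknown[]; }

function historyRewritten(newLog: { log: LogEntry[] },
    cachedEventDigest: string): boolean {
  const digests = newLog.log.map(entry => previousEventDigest(entry.event));
  // if the previously accepted event is missing, a fork has occurred
  return !digests.includes(cachedEventDigest);
}
```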
Threat: An attacker steals a cryptographic key used by the data controller and removes the controller's keys from change control.
Mitigation: The registration of recovery keys could allow recovery from this sort of compromise.
Threat: A quantum computer breaks elliptic curve cryptography.
Mitigation: A post-quantum signature is provided as a witness signature before elliptic curve cryptography is broken.
Threat: An attacker attempts to inject an event into the event log.
Mitigation: The `previousEvent` hash and the digital signatures on each operation prevent this particular attack.
Threat: An attacker attempts to reorder or duplicate existing events in the event log.
Mitigation: The `previousEvent` hash and the digital signatures on each operation prevent this particular attack.
Threat: A once-good witness goes rogue and refuses to witness an event.
Mitigation: Since trust is based on context, verifiers can choose when to trust a witness and when to switch trust to a different set of witnesses. It is presumed that most systems will utilize multiple witnesses with different ecosystem goals.
Threat: A witness disappears and can no longer be reached nor can their public cryptographic material be accessed.
Mitigation: If the witness was popular, a cryptographic log of the witness keys most likely exists at a verifier. If not, the ecosystem depends on multiple witnesses so that the failure of any subset does not result in a non-functional system.