[=Decentralized Identifiers=] (DIDs) are a type of identifier for verifiable, decentralized digital identity. These identifiers are designed to enable the controller of a DID to prove control over the identifier in a way that is independent of any centralized registry, identity provider, or certificate authority. These sorts of identifiers often rely on a heavy-weight registry, such as one based on Distributed Ledger Technologies (DLT), to create, read, update, and deactivate [=DIDs=]. This specification describes a [=witness=]-based [=DID Method=] where each DID Document's history is contained in a [=cryptographic event log=] where each event is [=witnessed=] by one or more trusted entities. The approach avoids the need for complex decentralized consensus algorithms as well as expensive proof-of-work or proof-of-stake systems.
In today's digital world, your online identity is controlled by companies and organizations that maintain databases of usernames, passwords, and personal information. When you create an account with a service, that service owns your identity—they can lock you out, change the rules, or even shut down entirely, taking your identity and connections with it. The `did:cel` method offers a fundamentally different approach: it enables you to create and control your own digital identifier without relying on any single company, organization, or centralized database. Your [=DID=] belongs to you alone, stored and managed using cryptographic techniques that prove ownership without needing anyone's permission or approval.
What makes `did:cel` truly decentralized is its [=witness=]-based architecture, which eliminates any single point of control or failure. Unlike traditional systems where a company's servers must be online for you to prove who you are, or blockchain-based systems where you depend on a specific network to function, `did:cel` distributes trust across multiple independent [=witnesses=] that attest to changes in your identity information. These [=witnesses=] don't control your identity—they simply provide timestamped confirmations that certain changes occurred. You choose which [=witnesses=] to use, and because no single [=witness=] has special authority, no one entity can block your access, censor your identity, or force unwanted changes. This architectural choice means your identity remains under your control even if individual [=witnesses=] become unavailable or act maliciously.
This decentralization directly empowers individuals to own their identity and data online in meaningful ways. With `did:cel`, you can prove who you are and what credentials or permissions you hold without asking permission from gatekeepers. You can take your identity with you across different services and platforms, establishing trust relationships on your own terms. If a service you use shuts down or changes its policies in ways you disagree with, your underlying identity remains intact and under your control—you simply use it with a different service. This represents a fundamental shift from identity as something granted by institutions to identity as an inherent digital right that you exercise through cryptographic proofs. When you control your identifier, you control your digital presence, your relationships, and your data.
This specification describes the complete technical framework for the `did:cel` method. The Identifier Syntax section explains how [=DIDs=] are constructed using cryptographic hashes. The Operations section details how to create, update, read, [=witness=], and deactivate [=DID documents=], including the [=cryptographic event log=] that maintains the complete history of changes. The Privacy Considerations section examines privacy implications of maintaining public event logs and offers mitigation strategies. Finally, the Security Considerations section addresses potential security risks and provides guidance for secure implementation and deployment of the `did:cel` method.
This specification optimizes for the following design goals:
Readers might also find the Goals section in the [[[CEL]]] specification of interest.
The following focal use cases have been identified as ones that this specification is capable of addressing.
Readers might also find the Use Cases and Requirements section in the [[[CEL]]] specification of interest.
The interaction flow between [=DID controllers=], [=witness=] services, storage services, and [=verifiers=] follows a standard process. When a [=DID controller=] creates or updates a [=DID document=], they generate a signed event and append it to their [=cryptographic event log=], creating a [=hashlink=] to the previous event. The controller then sends a cryptographic hash of this event to their chosen [=witness=] services, which each generate a [=data integrity proof=] attesting that they saw the cryptographic hash at a particular time, and then return the [=witness=] proof to the controller. The controller collects these [=witness=] proofs, attaches them to the event in the log, and publishes the complete [=cryptographic event log=] to one or more storage services.
When a [=verifier=] needs to validate a [=DID=], the controller provides the [=DID=] and a storage location that can be used to retrieve the [=cryptographic event log=] from the storage services. The [=verifier=] then downloads the log and verifies the integrity of the [=hashlinked=] chain. It does this by checking that operation proofs are signed by authorized keys from the [=DID document=], that the [=hashlinks=] are correct, that the operations performed are valid, and that [=witness=] proofs meet the verifier's requirements. This separation of concerns between controllers, [=witnesses=], storage, and verification enables a fully decentralized system where no single party has unilateral control.
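The hashlink-chain portion of this verification can be sketched non-normatively in Python. The event structure, the use of compact key-sorted JSON as a stand-in for full canonicalization, and the hex encoding of digests are illustrative assumptions; a conforming implementation also checks operation proofs, [=witness=] proofs, and the multibase encoding used by this method.

```python
import hashlib
import json


def verify_cel_chain(events: list[dict]) -> bool:
    # Sketch of hashlink-chain verification, assuming each event's
    # `previousEventHash` is the hex-encoded SHA3-256 digest of the
    # canonicalized prior event (including that event's own hashlink).
    previous_hash = None
    for event in events:
        if previous_hash is not None:
            if event.get("previousEventHash") != previous_hash:
                return False  # the chain is broken or was tampered with
        canonical = json.dumps(event, sort_keys=True,
                               separators=(",", ":")).encode("utf-8")
        previous_hash = hashlib.sha3_256(canonical).hexdigest()
    return True
```

Because each digest covers the prior event in its entirety, modifying any historical event invalidates every hashlink that follows it.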
Some terminology used throughout this document is defined in the Terminology section of the [[[CID]]] specification, the Terminology section of the [[[DID]]] specification, and the Terminology section of the [[[VC-DATA-INTEGRITY]]] specification. This section defines additional terms used throughout this specification.
The format for the `did:cel` method conforms to the [[[DID]]] specification and is simple. It consists of the `did:cel` prefix, followed by a Multibase base58-btc encoded value that is a concatenation of the Multihash identifier and the corresponding cryptographic digest for the initial [=cryptographic event log=] entry.
The ABNF for the identifier format is described below:
did-cel-format := did:cel:<mb-value>
mb-value := z[a-km-zA-HJ-NP-Z1-9]+
Alternatively, the encoding rules can also be thought of as the application of a series of transformation functions on the canonicalized initial event log entry:
did-cel-identifier := did:cel:MULTIBASE(base58-btc, MULTICODEC(sha3-256, JCS(initial-event-log-entry)))

A simple example of a valid `did:cel` DID is:
did:cel:zW1jPC3ViLfgPJX6KaPMhymin3LpATUgYTS7N58FLHtQ4HE
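The derivation above can be sketched non-normatively in Python. The event structure is hypothetical, compact key-sorted JSON stands in for full JCS (RFC 8785) canonicalization (they agree for ASCII-only content), and the `0x16` multihash header identifies sha3-256 in the multicodec registry.

```python
import hashlib
import json

BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"


def base58btc_encode(data: bytes) -> str:
    # Encode big-endian bytes using the Bitcoin base58 alphabet.
    num = int.from_bytes(data, "big")
    encoded = ""
    while num > 0:
        num, rem = divmod(num, 58)
        encoded = BASE58_ALPHABET[rem] + encoded
    # Preserve leading zero bytes as leading '1' characters.
    for byte in data:
        if byte != 0:
            break
        encoded = "1" + encoded
    return encoded


def derive_did_cel(initial_event: dict) -> str:
    # Canonicalize the initial event log entry (JCS stand-in).
    canonical = json.dumps(initial_event, sort_keys=True,
                           separators=(",", ":")).encode("utf-8")
    digest = hashlib.sha3_256(canonical).digest()
    # Multihash header: 0x16 identifies sha3-256, then the digest length.
    multihash = bytes([0x16, len(digest)]) + digest
    # Multibase: the 'z' prefix marks base58-btc encoding.
    return "did:cel:z" + base58btc_encode(multihash)
```

The derivation is deterministic: the same initial event always yields the same identifier, which is what makes the resulting [=DID=] self-certifying.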
[=Witness=] services are independent entities that provide cryptographic attestations for events in [=cryptographic event logs=], serving as a cornerstone of the `did:cel` method's decentralized architecture. Unlike centralized timestamp authorities or blockchain validators, [=witnesses=] operate autonomously without controlling or storing the [=DIDs=] they attest to, creating [=data integrity proofs=] that confirm events existed at specific points in time. This [=witness=]-based approach enables distributed trust without requiring expensive consensus mechanisms, heavy-weight infrastructure, or dependency on any single service provider. [=DID controllers=] select which [=witnesses=] to use for attestations, and because [=witnesses=] are independent and interchangeable, the system remains resilient even if individual [=witnesses=] become unavailable, compromised, or act maliciously.
[=Witnesses=] fulfill several critical roles in the `did:cel` ecosystem. They provide temporal anchoring by generating timestamped cryptographic proofs that establish when events occurred, creating verifiable evidence that a [=DID controller=] performed an operation at a specific point in time. They enable distributed validation by offering independent third-party attestation, preventing any single entity from having unilateral control over the validity of [=DID=] operations. [=Witnesses=] also increase resistance to tampering and fraud by creating multiple independent records of events that would need to be simultaneously compromised to forge a fraudulent history. Importantly, [=witnesses=] do not validate the semantic correctness of [=DID document=] changes or make authorization decisions—they simply attest that an event with specific cryptographic hash existed at the time of [=witnessing=]. This limited scope keeps [=witness=] operations simple, efficient, and resistant to coercion and censorship, as [=witnesses=] cannot be compelled to make subjective judgments about the legitimacy of [=DID=] operations.
Selecting the appropriate number of [=witnesses=] involves balancing security requirements against operational complexity and cost. Using more [=witnesses=] increases security by requiring attackers to compromise a larger number of independent services, but also increases the time, storage cost, and network overhead of obtaining attestations for each operation. Using too few [=witnesses=] creates risk that a single compromised or unavailable [=witness=] can block operations or enable fraud. For most use cases, a minimum of three independent [=witnesses=] from different operators and jurisdictions provides reasonable security while maintaining operational efficiency. High-security applications might require more [=witnesses=] with geographic and organizational diversity, while low-stakes applications might accept a single [=witness=] to minimize overhead.
[=DID controllers=] are advised to select [=witnesses=] operated by independent entities with diverse operational characteristics, avoiding [=witnesses=] that share infrastructure, jurisdiction, or governance that could create correlated failures. [=Verifiers=] establish their own policies about minimum [=witness=] thresholds they will accept, potentially requiring attestations from a majority or super-majority of a controller's chosen [=witnesses=] before trusting operations.
While governance and guidance around what makes an acceptable [=witness=] is out of scope for this specification, it is expected that [=verifier=] communities will publish lists of [=witnesses=] that they find acceptable, to help controllers determine which ones to use to maximize trust in their [=DIDs=].
Oblivious [=witnessing=] is an important privacy and efficiency feature where [=witnesses=] attest to the hash of an event, including the [=hashlink=] to the previous event, rather than the contents of the event. When a [=DID controller=] requests [=witnessing=], they compute a cryptographic hash of the event. They send only this cryptographic hash to [=witness=] services, which create [=data integrity proofs=] over the hash without ever seeing the actual event data. The [=witness=] returns a proof attesting that it witnessed the cryptographic hash at a specific time, providing temporal evidence without requiring disclosure of potentially sensitive information in the [=DID document=] or operation.
This approach offers several benefits: it protects the privacy of controller [=DID=] operations by preventing [=witnesses=] from seeing [=DID document=] contents, reduces the bandwidth and processing requirements for [=witnessing=] by transmitting only hashes rather than full documents, enables [=witnesses=] to operate as simple, stateless services that don't need to understand [=DID=] semantics, and prevents [=witnesses=] from being coerced to make judgments about the legitimacy of [=DID=] operations since they have no visibility into what they are attesting. [=Verifiers=] can validate oblivious [=witness=] proofs by computing the same hash of the event and confirming that the [=witness=] proof covers that hash, establishing the temporal attestation without compromising the privacy benefits.
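The oblivious [=witnessing=] exchange can be sketched non-normatively in Python. An HMAC with a demo secret stands in for the [=data integrity proof=] a real [=witness=] would create with a signing key (for example, Ed25519), and the field names are illustrative; the point of the sketch is that only the event hash crosses the wire.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Stand-in witness key; a real witness holds a Data Integrity signing
# key rather than a shared HMAC secret.
WITNESS_SECRET = b"witness-demo-secret"


def controller_prepare_request(event: dict) -> str:
    # The controller sends only the event's hash; the event contents
    # never leave the controller's custody.
    canonical = json.dumps(event, sort_keys=True,
                           separators=(",", ":")).encode("utf-8")
    return hashlib.sha3_256(canonical).hexdigest()


def witness_attest(event_hash: str) -> dict:
    # The witness signs the hash plus a timestamp, without ever seeing
    # the event data it is attesting to.
    created = datetime.now(timezone.utc).isoformat()
    payload = f"{event_hash}|{created}".encode("utf-8")
    signature = hmac.new(WITNESS_SECRET, payload,
                         hashlib.sha3_256).hexdigest()
    return {"eventHash": event_hash, "created": created,
            "proofValue": signature}
```

A [=verifier=] later recomputes the same hash from the full event and checks that the proof covers it, gaining the temporal attestation without the [=witness=] ever learning the event contents.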
The following section outlines the DID operations for the `did:cel` method.
The create operation establishes a new `did:cel` [=DID document=] with a [=self-certifying identifier=] derived from the document's cryptographic hash. This operation generates an initial cryptographic key pair, constructs a [=DID document=] containing the public key as an assertion method, and adds a [=data integrity proof=] to the document. This approach ensures that the [=DID=] is intrinsically bound to the document's initial state, providing strong integrity guarantees without requiring external registration.
Once the [=DID document=] is created and signed, a [=cryptographic event log=] (CEL) is initialized to track the complete history of operations on this DID. The log's first entry is a `create` event that contains the signed [=DID document=] as its data payload. This event serves as the cryptographic foundation for all subsequent updates, enabling verifiable audit trails and [=witness=] attestation. The combination of the [=self-certifying identifier=], cryptographic proof, and event log creates a decentralized identifier implementation that requires no centralized authority for creation, validation, or resolution.
An example of a [=cryptographic event log=] containing the creation event for a `did:cel` [=DID=] is shown below.
The following algorithm specifies how to create a new `did:cel` DID document with an initial verification method and establish a cryptographic event log. The algorithm takes an optional [=map=] |options| as input. The |options| map MAY contain a |keyType| property specifying a cryptographic key type to use (defaults to `P-256`). Output is a [=map=] containing the generated |didDocument|, the associated |keyPair|, and the initial |cryptographicEventLog|, or an error. Whenever this algorithm encodes strings, it MUST use UTF-8 encoding.
The [=witness=] operation provides independent cryptographic attestation for events in a [=cryptographic event log=], establishing temporal evidence and distributed validation of [=DID=] operations. After a [=DID document=] is modified, [=witness=] services—independent entities operating under their own authority—generate [=data integrity proofs=] over events in the log. Each [=witness=] returns a cryptographic proof that attests the event existed and was [=witnessed=] at a specific point in time.
The [=witnessing=] process does not modify the event itself; rather, it produces a collection of cryptographic proofs that are attached to the event structure for subsequent verification. These [=witness=] attestations serve multiple purposes: they provide temporal anchoring by demonstrating when an event was [=witnessed=], enable auditability through independent third-party validation, and increase resistance to tampering by distributing trust across multiple independent [=witnesses=]. Unlike centralized timestamp authorities, the [=witness=] architecture allows [=DID controllers=] to select [=witness=] services, preventing single points of failure while maintaining cryptographic verifiability of the entire event history.
The following algorithm specifies how to generate [=witness=] attestations for the most recent event in a [=cryptographic event log=]. [=Witnesses=] are independent entities that cryptographically sign events to provide decentralized validation and auditability. The algorithm takes a [=map=] |cel| as input, which is the [=cryptographic event log=] containing one or more events. Output is an array of proof objects, one from each configured [=witness=], or an error. Whenever this algorithm encodes strings, it MUST use UTF-8 encoding.
The [=witness=] operation does not modify the [=cryptographic event log=] itself. The returned proofs are attached to the event in the log structure. [=Witnesses=] provide independent validation and temporal attestation, enabling auditability and resistance to single points of failure in the DID method infrastructure.
[=Cryptographic event logs=] can become sizeable after a few decades of usage. For example, a production-grade organizational [=DID document=] that requires key rotations every three months, with three witnesses per event, can grow to roughly 5MB over 30 years. However, the contents of the file (such as identifiers) are largely repetitive, so compression can reduce a 5MB log to roughly 600KB. These significant savings in both network bandwidth and storage mean that all storage and read operations benefit from compression. Therefore, all [=cryptographic event logs=] MUST be transmitted using gzip compression.
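The effect of gzip on a repetitive log can be demonstrated with a toy example. The event structure below is illustrative, standing in for a real [=cryptographic event log=]; the point is that repeated identifiers and field names compress heavily.

```python
import gzip
import json

# A toy event log with highly repetitive identifiers, standing in for
# a real CEL serialized as JSON.
log = json.dumps([
    {"event": {"operation": {"type": "update"},
               "previousEventHash": "z" + "Qm" * 20}}
    for _ in range(1000)
]).encode("utf-8")

compressed = gzip.compress(log)
# Repetitive JSON typically compresses by more than an order of
# magnitude, consistent with the 5MB-to-600KB figure above.
print(len(log), len(compressed))
```

Decompressing with `gzip.decompress(compressed)` recovers the original bytes exactly, so compression is transparent to verification.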
The update operation enables modifications to an existing `did:cel` [=DID document=] while maintaining a verifiable audit trail through the [=cryptographic event log=]. When changes are made to a [=DID document=]—such as adding or removing verification methods, updating service endpoints, or setting expiration dates—the update operation ensures these modifications are cryptographically signed and recorded in the event log. The operation proceeds in two phases: first, the modified [=DID document=] receives a fresh [=data integrity proof=] signed by an authorized assertion method key; second, a new update event is appended to the [=cryptographic event log=] with a hash link to the previous event, creating an immutable chain of document history.
The [=hashlinking=] mechanism is central to the update operation's security properties. Each update event includes a `previousEventHash` property containing the SHA3-256 hash of the prior event. This cryptographic chain ensures that any tampering with historical events would be immediately detectable, as it would break the hash links throughout the chain. The combination of cryptographic proofs on the [=DID document=] and [=hashlinked=] events in the log provides both authenticity guarantees (the document was signed by an authorized key) and integrity guarantees (the complete history of changes is verifiable and tamper-evident). After performing an update, implementations typically invoke the [=witness=] operation to obtain independent temporal attestations of the modification.
An example of an update operation serialized to a [=cryptographic event log=] is shown below:
The following algorithm specifies how to update an existing `did:cel` [=DID document=] and record the change in the [=cryptographic event log=]. The update operation consists of two phases: regenerating the cryptographic proof on the modified [=DID document=], and appending a [=hashlinked=] update event to the log. The algorithm takes a [=map=] |didDocument| (the modified [=DID document=]), a [=map=] |assertionMethod| (the key pair to use for signing), and a [=map=] |cel| (the existing [=cryptographic event log=]) as input. Output is the updated |didDocument| with a new proof and the updated |cel| with the new event appended, or an error. Whenever this algorithm encodes strings, it MUST use UTF-8 encoding.
There is always a possibility that either an attacker or a dishonest controller will try to fork the history of the cryptographic event log. This specification does not yet mandate which protections are used against each attack. One approach could have the CEL storage services enforce rules as part of the "registration" step, such as: 1) ensure that the log history validates, 2) ensure that the log is not being rolled back, and 3) if logs conflict between multiple CEL storage services, the longest valid log wins. Another approach could use a keep-alive mechanism where a specific offline key is committed to and used for updates within a certain time frame. A future revision of this specification needs to detail the approach that provides adequate fork protection for a DID.
The update operation maintains the integrity of the [=cryptographic event log=] through [=hashlinking=]. Each update event includes a `previousEventHash` property containing the hash of the prior event, creating a verifiable chain that prevents tampering and enables auditing of the [=DID document=]'s complete history. After updating, implementations invoke the [=witness=] operation to obtain independent attestations of the update event.
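The second phase of the update operation, appending a [=hashlinked=] event, can be sketched non-normatively in Python. The field names mirror the `previousEventHash` property described above but are otherwise illustrative, and compact key-sorted JSON stands in for full canonicalization.

```python
import hashlib
import json


def append_update_event(cel: list[dict], did_document: dict) -> list[dict]:
    # Hashlink a new update event to the tail of the log: the new
    # event's `previousEventHash` is the SHA3-256 digest of the
    # canonicalized prior event.
    canonical_prev = json.dumps(cel[-1], sort_keys=True,
                                separators=(",", ":")).encode("utf-8")
    event = {
        "operation": {"type": "update"},
        "data": did_document,  # the freshly signed DID document
        "previousEventHash": hashlib.sha3_256(canonical_prev).hexdigest(),
    }
    return cel + [event]
```

Because the digest covers the prior event in full, including that event's own hashlink, any tampering with history breaks every subsequent link in the chain.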
The heartbeat operation prevents an attacker, such as a compromised [=cryptographic event log=] storage service, from truncating a cryptographic log. If an attacker attempts to truncate the cryptographic log beyond the heartbeat window for a specific [=DID=], it will be interpreted as a deactivation by a [=verifier=]. [=DID documents=] created with the `did:cel` method can include a `heartbeatFrequency` property that specifies how frequently heartbeat events should be generated to prevent automatic deactivation. When the time elapsed since the last event in the [=cryptographic event log=] exceeds the `heartbeatFrequency` duration, the [=DID=] is considered deactivated by [=verifiers=] or [=cryptographic event log=] storage services. Any events, including heartbeat events, reset this timer, demonstrating that the [=DID controller=] remains in control and wishes to keep the [=DID=] active.
Unlike update operations that modify the [=DID document=], heartbeat events contain only an operation structure with a `type` of `heartbeat` and no associated data payload. This minimal structure reduces the computational cost and storage requirements for maintaining [=DID=] liveness while still providing cryptographic proof that the [=DID controller=] possesses the private keys necessary to sign events. The heartbeat event is signed using an assertion method key and [=hashlinked=] to the previous event in the log, maintaining the integrity of the event chain. After creating a heartbeat event, implementations typically invoke the [=witness=] operation to obtain independent temporal attestations, which provide evidence that the heartbeat occurred at a specific point in time and can be used to verify compliance with the `heartbeatFrequency` policy.
The following algorithm specifies how to create a heartbeat event and append it to the [=cryptographic event log=]. The heartbeat operation demonstrates continued control of the [=DID=] without modifying the [=DID document=] contents. The algorithm takes a [=map=] |assertionMethod| (the key pair to use for signing) and a [=map=] |cel| (the existing [=cryptographic event log=]) as input. Output is the updated |cel| with the heartbeat event appended, or an error. Whenever this algorithm encodes strings, it MUST use UTF-8 encoding.
After creating a heartbeat event, implementations invoke the [=witness=] operation to obtain independent temporal attestations. The timestamps in [=witness=] proofs provide evidence of when the heartbeat occurred, which can be used to verify compliance with the `heartbeatFrequency` policy specified in the [=DID document=]. [=Verifiers=] might reject [=DIDs=] where the time elapsed since the last event exceeds the `heartbeatFrequency` duration, treating such [=DIDs=] as potentially deactivated.
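A verifier's freshness check against the `heartbeatFrequency` window can be sketched non-normatively in Python. Parsing the property's duration encoding into a `timedelta` is left out as an assumption; the sketch only shows the comparison a [=verifier=] performs.

```python
from datetime import datetime, timedelta, timezone


def exceeds_heartbeat_window(last_event_time: datetime,
                             heartbeat_frequency: timedelta,
                             now: datetime) -> bool:
    # A verifier treats the DID as potentially deactivated once the gap
    # since the last witnessed event exceeds the declared
    # heartbeatFrequency duration. Any new event resets the timer.
    return (now - last_event_time) > heartbeat_frequency
```

The timestamps used for `last_event_time` would typically come from [=witness=] proofs rather than controller-asserted values, since witnesses provide the independent temporal evidence.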
The deactivate operation permanently disables a `did:cel` [=DID=], signaling that the [=DID controller=] no longer wishes to use the identifier and that it is not to be trusted for any future operations. Once a deactivation event is added to the [=cryptographic event log=] and [=witnessed=], the [=DID=] enters a terminal state where no further updates, heartbeats, or other operations can be performed. Deactivation is irreversible—the [=DID=] cannot be reactivated or modified after a deactivation event is recorded. This operation is typically used when a [=DID controller=] is retiring an identifier or is migrating to a new [=DID=] and wishes to explicitly deprecate the old identifier to prevent confusion or misuse.
Similar to heartbeat operations, deactivation events contain only an operation structure with a `type` of `deactivate` and no associated data payload. The deactivation event is signed using an assertion method key from the current [=DID document=] and [=hashlinked=] to the previous event in the log, proving that the [=DID controller=] authorized the deactivation. After creating a deactivation event, implementations invoke the [=witness=] operation to obtain independent attestations of the deactivation, which provides temporal evidence and distributed validation that the deactivation occurred. [=Verifiers=] that encounter a [=cryptographic event log=] containing a deactivation event reject any future operations that use the [=DID=] after it is deactivated, treating it as permanently invalid thereafter. The complete event log, including the deactivation event, remains accessible for historical auditing and verification purposes, but the [=DID=] itself is no longer usable for authentication, assertion, or any other cryptographic operations.
The following algorithm specifies how to create a deactivation event and append it to the [=cryptographic event log=]. The deactivation operation permanently disables the [=DID=], preventing any further operations. The algorithm takes a [=map=] |assertionMethod| (the key pair to use for signing) and a [=map=] |cel| (the existing [=cryptographic event log=]) as input. Output is the updated |cel| with the deactivation event appended, or an error. Whenever this algorithm encodes strings, it MUST use UTF-8 encoding.
Deactivation is a terminal operation that cannot be reversed. Once a deactivation event is added to the [=cryptographic event log=] and [=witnessed=], the [=DID=] is permanently disabled. [=Verifiers=] that encounter a [=cryptographic event log=] containing a deactivation event reject any operations after the [=DID=] was deactivated. Implementations that allow deactivation might require additional authentication beyond the standard assertion method signature, such as multi-signature authorization or confirmation from recovery keys, to prevent accidental or malicious deactivation of active [=DIDs=]. The complete event log remains accessible for historical auditing, but the [=DID=] itself is no longer valid for any cryptographic operation beyond the deactivate timestamp.
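The terminal nature of deactivation can be sketched non-normatively in Python; the event structure is illustrative. A [=verifier=] scanning the log treats the presence of any `deactivate` operation as permanently disabling the [=DID=].

```python
def did_is_active(cel: list[dict]) -> bool:
    # Once any event carries a `deactivate` operation, the DID is
    # permanently disabled; verifiers reject all operations that
    # follow it. The log itself remains available for auditing.
    return all(event.get("operation", {}).get("type") != "deactivate"
               for event in cel)
```

A full verifier would also confirm that the deactivation event itself is correctly signed and [=hashlinked=] before honoring it.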
This section contains a variety of privacy considerations that people using the `did:cel` Method are advised to consider before deploying this technology in a production setting. Readers are urged to read the Privacy Considerations section of the [[[CID]]] specification, as well as the Privacy Considerations section of the [[[DID]]] specification, before reading this section.
The [=cryptographic event log=] (CEL) for a DID is designed to be publicly verifiable, which means that the complete history of [=DID document=] operations, including creation, updates, additions, and removals of verification methods and services, is permanently recorded and accessible. This transparency enables auditability and trust but comes at the cost of revealing temporal patterns and the evolution of a DID's capabilities over time. Observers can analyze the log to determine when verification methods were added or expired, when services were introduced or removed, and how frequently the [=DID document=] has been modified.
To mitigate this privacy concern, implementers are advised to carefully consider what information is included in [=DID documents=] and when updates are performed. Batching multiple related changes into a single update operation, when possible, reduces the granularity of information exposed through temporal analysis. For use cases requiring higher privacy, consider using ephemeral [=DIDs=] that are rotated regularly, or employing [=DIDs=] with shorter active periods for their event logs. Organizations might also document their [=DID document=] update policies to help users understand the privacy implications of using their [=DID=]-related services.
The [=cryptographic event log=] architecture relies on [=witness=] services to provide attestations for [=DID=] operations. The specific set of [=witnesses=] chosen to attest to an event can serve as a correlatable fingerprint, especially if the [=witness=] configuration is unique or rarely used. If a DID consistently uses the same set of [=witnesses=] across multiple operations, or if the [=witness=] selection pattern is distinctive, observers might be able to correlate different [=DIDs=] or activities as belonging to the same entity or organization. This correlation risk increases when custom [=witness=] services are deployed or when non-standard [=witness=] configurations are used.
To reduce correlation risk, implementers might use commonly deployed and widely adopted [=witness=] services when privacy is a concern. Using the same [=witness=] configuration as other entities in the ecosystem provides herd privacy by making it difficult to distinguish one [=DID controller=] from another based solely on [=witness=] selection. Standardizing [=witness=] selection policies across an ecosystem can also help establish common configurations that enhance privacy through ubiquity.
Each event in the [=cryptographic event log=] includes temporal information, either explicitly through timestamps in [=witness=] proofs or implicitly through the sequence and timing of operations. This temporal metadata can reveal patterns about the DID controller's activities, operational hours, time zones, or response times to security incidents. For example, if verification methods are consistently updated during specific hours, this might reveal information about the organization's business hours or geographic location. Rapid succession of updates might indicate automated processes or security incidents, while long periods of inactivity followed by bursts of activity can reveal operational patterns.
Implementers can mitigate temporal metadata leakage by introducing delays or jitter in update operations to obscure the precise timing of changes. Automated systems might avoid predictable update schedules and instead use randomized timing within acceptable windows. For sensitive operations, consider batching updates and releasing them at randomized intervals to prevent timing correlation. Note that even without explicit timestamps, the sequence of events and [=witness=] attestation timing can leak temporal information, so careful consideration of when to perform DID operations is important for privacy-sensitive use cases.
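A minimal, non-normative sketch of randomized jitter is shown below; the base delay and jitter fraction are illustrative parameters an implementer would tune to their acceptable publication window.

```python
import random


def jittered_delay_seconds(base: float, jitter_fraction: float = 0.5) -> float:
    # Publish an update after a randomized delay so that the witnessed
    # timestamp does not reveal exactly when the change was authored.
    # The delay falls uniformly in [base, base * (1 + jitter_fraction)].
    return base * (1 + random.uniform(0.0, jitter_fraction))
```

For example, an automated system might sleep for `jittered_delay_seconds(3600)` before requesting [=witness=] attestations, decorrelating witnessed timestamps from authoring times.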
The [=hashlinking=] mechanism that chains events together in the [=cryptographic event log=] provides strong integrity guarantees but also creates an immutable, permanent record of all DID operations. Once an event is added to the log, [=witnessed=], and stored, it cannot be removed or modified without breaking the cryptographic chain. This means that information about previous verification methods, services, or other metadata remains permanently visible in the log even after being removed from the current [=DID document=]. This immutability can be problematic for privacy if sensitive information was inadvertently included in earlier versions of the [=DID document=], or if the history of changes itself reveals sensitive information about the [=DID controller=]'s activities or relationships.
To address the immutability concern, implementers might exercise caution when including information in [=DID documents=], as any data added will become part of the permanent record. Before performing operations, carefully review the information being added to ensure it does not contain sensitive data that needs to remain private. For cases where the historical record becomes problematic, [=DID controllers=] can deactivate the current DID and create a new one, though this breaks continuity and requires updating all systems that reference the old DID. Alternatively, implement clear documentation and guidelines for [=DID controllers=] about what information is appropriate to include in [=DID documents=] to prevent privacy issues before they occur.
A DID identifier in the `did:cel` method is derived from cryptographic material from the initial creation event, making it a persistent [=self-certifying identifier=] that is correlatable across all uses. Unlike some privacy-preserving identifier systems that allow for easy rotation or unlinkable presentations, a `did:cel` identifier remains constant and serves as a permanent correlation point. Any entity that observes the DID identifier in multiple contexts can definitively link those interactions as involving the same [=DID controller=]. This persistence is valuable for establishing long-term identity and trust but directly conflicts with privacy goals that require unlinkability between different interactions or contexts.
For scenarios requiring unlinkability, implementers are advised not to reuse the same DID across different contexts or relationships. Instead, create separate [=DIDs=] for different purposes, relationships, or contexts where correlation needs to be prevented. Implement DID management practices that include regular rotation of [=DIDs=] when appropriate, and clearly document which [=DIDs=] are used for which purposes. Organizations might also provide tooling to help users manage multiple [=DIDs=] and understand the privacy implications of DID reuse. For use cases where both persistence and privacy are required, consider layering additional privacy-preserving mechanisms on top of the DID infrastructure, such as using [=verifiable credentials=] with unlinkable presentation features.
[=DID documents=] contain [=verification methods=] that specify the cryptographic keys and their purposes (authentication, assertion, key agreement, etc.). The complete set of verification methods, their types, cryptographic algorithms, and relationship assignments can serve as a unique fingerprint for a DID, especially when non-standard cryptographic suites are used or when the combination of verification methods is unusual. Even when using common cryptographic suites, the specific number and configuration of verification methods for different purposes can reveal information about the DID controller's intended use cases, security posture, or organizational structure.
To reduce the identifiability of [=DIDs=] through verification method enumeration, implementers might favor standard, commonly used verification method configurations that are widely deployed in the ecosystem. Avoid creating unique or unusual combinations of verification methods unless specifically required. When possible, use the minimum number of verification methods necessary for the intended functionality to reduce the uniqueness of the configuration. For organizations deploying multiple [=DIDs=], consider standardizing on common verification method templates that are used across many [=DIDs=] to prevent fingerprinting. Documentation might also guide users on recommended verification method configurations that balance functionality with privacy considerations.
DID documents can include service endpoints that specify how to interact with services related to the [=DID controller=]. These service endpoints often contain URLs or other network identifiers that reveal information about the DID controller's infrastructure, service providers, or operational environment. Service endpoints might point to specific servers, domains, or third-party services, which can be used to correlate [=DIDs=], identify the organizations or individuals behind them, or map out relationships between different entities. Even when service endpoints use common infrastructure, the specific combination or configuration of services can serve as an identifying characteristic.
Implementers might carefully consider whether service endpoints need to be included in the [=DID document=] itself, or whether they could be communicated through other channels that provide better privacy properties. When service endpoints need to be included, use generic or shared infrastructure that does not reveal specific information about the [=DID controller=]. Consider using privacy-preserving relay services, proxy servers, or shared service infrastructure that is used by multiple entities to prevent correlation. Avoid including service endpoints that point to unique or rarely used domains. For higher privacy scenarios, service endpoint information might be exchanged out-of-band or through encrypted communication channels rather than being published in the public [=DID document=].
When [=witness=] services attest to [=cryptographic event log=] events, they create proofs that include metadata such as their [=witness=] identifier, the cryptographic suite used, and timing information. While this metadata is necessary for verification, it can also reveal information about the [=DID controller=]'s relationships with [=witness=] services, their [=witness=] selection strategy, and potentially their geographic location or operational preferences. If a DID consistently uses [=witnesses=] that are associated with specific geographic regions, industries, or organizations, this association can reveal information about the [=DID controller=]. Additionally, the specific combination of [=witness=] services chosen might reveal business relationships or trust relationships that the [=DID controller=] has established.
To mitigate privacy concerns related to [=witness=] metadata, implementers might select [=witness=] services that are widely used and geographically distributed to avoid revealing location information. When possible, use [=witness=] services that are operated by neutral, well-known entities rather than industry-specific or organization-specific [=witnesses=] that might reveal affiliations. Documentation might guide users on selecting [=witness=] services that align with their privacy requirements, and ecosystems might encourage the deployment of diverse, broadly available [=witness=] services that can be used interchangeably to enhance herd privacy.
The cryptographic algorithms and key types chosen during DID creation and throughout the DID's lifecycle represent long-term commitments that are permanently recorded in the [=cryptographic event log=]. These cryptographic choices can reveal information about when the DID was created (based on algorithm popularity at that time), the security requirements or preferences of the [=DID controller=], and potentially the systems or software used to create the DID. As cryptographic algorithms age and new algorithms are adopted, the continued use of older algorithms or the early adoption of newer ones can serve as identifying characteristics. Furthermore, the cryptographic choices reveal the [=DID controller=]'s security posture and risk tolerance.
Implementers might provide clear guidance on recommended cryptographic algorithms that balance security requirements with privacy considerations. Using widely adopted, current best-practice cryptographic algorithms helps provide herd privacy by ensuring that many [=DIDs=] share similar cryptographic profiles. Organizations might plan for cryptographic agility by supporting algorithm migration paths that allow updating to newer algorithms without requiring DID replacement. When upgrading cryptographic algorithms, coordinate with broader ecosystem adoption to avoid standing out as an early or late adopter. Documentation might explain the privacy implications of different cryptographic choices and provide recommendations for standard configurations that are appropriate for different use cases.
Resolving a DID to obtain its current [=DID document=] and verify the [=cryptographic event log=] requires accessing the log data, which is stored in various locations. The act of resolving a DID can reveal information about the resolver's interest in that particular DID to the entities operating the storage or retrieval infrastructure. This creates potential for surveillance, tracking of resolution patterns, or profiling of which [=DIDs=] are being resolved by which entities. Network-level metadata such as IP addresses, timing of resolution requests, and patterns of correlated resolutions can further compromise privacy by revealing resolver identity or activities.
To enhance resolution privacy, implementers might use privacy-preserving resolution mechanisms such as proxies, Oblivious HTTP, VPNs, or onion routing when resolving DIDs. Aggressive caching of [=DID documents=] and [=cryptographic event log=] data reduces the frequency of resolution requests and limits the metadata exposed to storage providers. Consider using decentralized resolution infrastructure that does not rely on single points that can monitor resolution patterns. Implement resolution protocols that minimize metadata leakage, such as fetching data through encrypted channels or using obfuscation techniques that prevent correlation of multiple resolution requests. For high-privacy scenarios, design systems that pre-fetch or batch resolve multiple [=DIDs=] to obscure which specific [=DIDs=] are of interest, and utilize privacy-preserving query protocols where available.
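The batching and pre-fetching strategy above can be illustrated by mixing the DIDs of interest with decoys drawn from a public pool, so a storage provider observing the batch cannot tell which resolutions the caller cares about. This is a minimal sketch; the function and parameter names are assumptions.

```python
import random

def build_resolution_batch(target_dids, decoy_pool, batch_size):
    """Pad a resolution request with decoy DIDs and shuffle the result so
    the storage provider cannot distinguish targets from cover traffic.
    Hypothetical sketch; a real system would also randomize request timing."""
    decoys = random.sample(decoy_pool, max(0, batch_size - len(target_dids)))
    batch = list(target_dids) + decoys
    random.shuffle(batch)
    return batch
```

The privacy gained grows with the size and diversity of the decoy pool; a small or static pool lets an observer narrow down the true targets over repeated batches.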
This section contains a variety of security considerations that people using the `did:cel` Method are advised to consider before deploying this technology in a production setting. Readers are urged to read the Security Considerations section of the [[[CID]]] specification, as well as the Security Considerations section of the [[[DID]]] specification, before reading this section.
The security of the [=cryptographic event log=] relies on [=witness=] services providing independent attestations to [=DID document=] operations. If a sufficient number of [=witnesses=] collude or are compromised by an attacker, they could deny witnessing calls from specific IP addresses or generate invalid proofs to deny a [=DID controller=] the ability to witness a particular event. The impact of such an attack depends on how many [=witnesses=] are required for an event to be considered valid and how many [=witnesses=] an attacker can control. If [=witnesses=] are operated by a single entity or share common infrastructure, the risk of coordinated compromise increases significantly.
To mitigate [=witness=] collusion risks, implementers are encouraged to select [=witnesses=] operated by independent entities with diverse operational and jurisdictional characteristics. Systems that verify [=DIDs=] can require attestations from a minimum number of [=witnesses=] before accepting an event as valid, and can maintain lists of trusted [=witnesses=] with known good security practices. [=DID controllers=] are encouraged to select geographically and organizationally distributed [=witnesses=] to minimize the risk of coordinated attacks. Monitoring [=witness=] behavior over time can help detect anomalies that might indicate compromise, such as [=witnesses=] consistently refusing to sign events that other [=witnesses=] sign. For critical applications, implementing [=witness=] rotation policies or requiring attestations from a larger pool of [=witnesses=] can provide additional security margins.
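A verifier enforcing a minimum-attestation policy might look like the sketch below. The `witnessId` field name and the shape of the proof objects are assumptions made for illustration; only the counting logic reflects the guidance above.

```python
def event_is_accepted(witness_proofs, trusted_witnesses, threshold):
    """Accept an event only if at least `threshold` DISTINCT trusted
    witnesses have attested to it. Duplicate proofs from the same witness
    count once, so a single compromised witness cannot inflate the count.
    Field names are hypothetical."""
    attesting = {p["witnessId"] for p in witness_proofs
                 if p["witnessId"] in trusted_witnesses}
    return len(attesting) >= threshold
```

Deduplicating by witness identifier before counting is the important detail: counting raw proofs would let one colluding witness meet the threshold alone.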
When a cryptographic key used in a [=DID document=] is compromised or needs to be rotated for operational reasons, the [=DID controller=] faces the challenge of updating the [=DID document=] while maintaining the integrity and verifiability of the [=cryptographic event log=]. The key rotation process itself requires signing operations using existing keys, but if those keys are compromised, an attacker might perform unauthorized rotations before the legitimate [=DID controller=] can act. Additionally, the immutable nature of the log means that compromised keys remain visible in the historical record, potentially allowing attackers to forge signatures on historical [=DID documents=] if proper expiration handling is not implemented.
Implementers are encouraged to implement key expiration mechanisms using the `expires` property on [=verification methods=], which limits the time window during which a compromised key can be abused. When rotating keys, [=DID controllers=] are advised to follow a process of first adding new keys to the [=DID document=], then expiring old keys, and only after confirming the new keys are functional should the old keys be removed. This overlapping approach ensures continuity of operations during rotation. For critical keys, maintaining offline backup keys with authority to perform emergency rotations can provide recovery paths when primary keys are compromised. Conforming systems that verify [=data integrity proofs=] check [=verification method=] expiration times and reject proofs created after a [=verification method=] has expired, limiting the damage from compromised keys. Regular key rotation schedules, even without evidence of compromise, can reduce the impact window of undetected key compromises.
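The expiration check described above, where conforming verifiers reject proofs created after a [=verification method=] has expired, can be sketched as follows. The ISO 8601 parsing is simplified for the example (offset form rather than `Z`), and the function name is an assumption.

```python
from datetime import datetime
from typing import Optional

def proof_within_validity(proof_created: str,
                          method_expires: Optional[str]) -> bool:
    """Reject a data integrity proof if it was created after the
    verification method's `expires` timestamp. A method with no `expires`
    property is treated as unexpired. Simplified ISO 8601 handling."""
    if method_expires is None:
        return True
    created = datetime.fromisoformat(proof_created)
    expires = datetime.fromisoformat(method_expires)
    return created <= expires
```

Note that this limits abuse of a stolen key after its advertised expiry, but cannot by itself detect proofs with falsified `created` timestamps; that is where [=witness=] attestations add corroboration.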
The `did:cel` method relies on SHA3-256 cryptographic hashing for multiple security-critical functions: generating [=DID=] identifiers from [=DID documents=], creating hash links between events in the [=cryptographic event log=], and ensuring the integrity of the event chain. If SHA3-256 were to become vulnerable to collision attacks where two different inputs produce the same hash output, attackers could create fraudulent [=DID documents=] that resolve to the same [=DID=] identifier, forge event chain links to hide modifications, or substitute malicious events while maintaining apparent chain integrity. The security of the current [=DID=] method depends on the continued collision resistance of SHA3-256, though upgrading to a newer cryptographic hashing algorithm is possible and supported.
While SHA3-256 is currently considered cryptographically secure with no known practical collision attacks, implementers are advised to monitor cryptographic research for any developments that might weaken SHA3-256. Systems can be designed with cryptographic agility in mind, allowing for future migration to stronger hash functions if SHA3-256 becomes compromised. When verifying [=DIDs=], conforming implementations validate not just that hash links are correctly formed, but also that the entire chain of events maintains integrity from the creation event forward. For long-lived [=DIDs=], [=DID controllers=] might consider planning for eventual migration to stronger cryptographic primitives before SHA3-256 reaches end of life, though the timeline for such migrations is measured in decades given current cryptographic knowledge.
Events in the [=cryptographic event log=] contain [=data integrity proofs=] and [=witness=] attestations that are valid cryptographic signatures. Without proper safeguards, an attacker might capture a valid event from one [=DID=]'s log and attempt to replay it in a different context, such as in another [=DID=]'s log or at a different position in the same log. While the [=hashlinking=] mechanism provides some protection by binding events to specific positions in the chain, attackers might attempt to fork a log at an earlier point and replay captured events to create a fraudulent alternate history. The self-contained nature of events means that signatures remain mathematically valid even when presented out of their intended context.
The [=hashlinking=] of events through the `previousEventHash` property provides the primary defense against replay attacks by binding each event to a specific position in the event chain. Conforming implementations that verify [=cryptographic event logs=] validate that each event's `previousEventHash` value correctly matches the hash of the preceding event, and that the chain is unbroken from the creation event to the most recent event. Systems that maintain state about known [=DIDs=] can track the highest event sequence number they have seen for each [=DID=] and reject events that appear to rewind the log.
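The sequence-tracking defense above can be sketched as a small stateful guard. The class and method names are hypothetical; a production verifier would persist this state durably rather than holding it in memory.

```python
class RewindGuard:
    """Track the highest event sequence number seen per DID and reject
    events that would rewind the log, which would indicate a replay of an
    earlier event or a fraudulent alternate history. In-memory sketch only;
    real deployments persist this state."""

    def __init__(self):
        self._highest = {}

    def accept(self, did: str, sequence: int) -> bool:
        if sequence <= self._highest.get(did, -1):
            return False  # replay or rewind attempt
        self._highest[did] = sequence
        return True
```

This guard complements, rather than replaces, hash-chain verification: it only catches rewinds relative to what this particular observer has already seen.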
The [=cryptographic event log=] creates a linear chain of events through [=hashlinking=], but without additional coordination mechanisms, a [=DID controller=] or an attacker with access to the [=DID controller=]'s keys could create multiple divergent chains (forks) from the same parent event. This could result in different observers seeing different versions of the [=DID document=] depending on which fork they have accessed. A compromised or malicious [=DID controller=] might try to selectively present different forks to different [=verifiers=], creating confusion about the authoritative state of the [=DID document=].
Detecting forks requires [=verifiers=] to compare event logs from multiple [=cryptographic event log=] storage services listed in the DID Document history and identify cases where divergent chains exist for the same [=DID=]. This storage-service-based architecture provides fork detection that is independent of the [=witness=] services. Clear policies about how [=verifiers=] handle fork detection, such as "longest fork always wins", and whether [=cryptographic event log=] services alert each other or [=DID=] observers, can strengthen the overall security posture against forking attacks.
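The cross-service comparison described above can be sketched as follows, treating each fetched log as a list of event hashes. This is an illustrative sketch only; how logs are fetched and how a detected fork is handled are deployment decisions.

```python
def detect_fork(logs):
    """Compare event-hash chains fetched from several storage services for
    the same DID. A fork exists when two chains share a prefix but differ
    at some position; return that position, or None if no fork is found.
    Chains of different lengths with a common prefix are not forks — one
    service may simply be behind. Sketch only."""
    for i, a in enumerate(logs):
        for b in logs[i + 1:]:
            for pos, (ha, hb) in enumerate(zip(a, b)):
                if ha != hb:
                    return pos  # index where the chains diverge
    return None
```

A verifier that finds a divergence would then apply its fork policy (for example, "longest fork always wins") and could alert other storage services or observers.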
Each event in a [=cryptographic event log=] contains [=data integrity proofs=] on the operation and one or more [=witness=] proofs attesting to the event. Verifying the complete log requires checking every proof in every event from the creation event forward, which involves cryptographic signature verification operations, hash computations, and canonicalization of JSON data structures. For [=DIDs=] with long histories containing many events and [=witness=] proofs, the computational cost of full verification can be significant, potentially leading to resource exhaustion or denial-of-service conditions when processing maliciously crafted logs with excessive events or operation data.
Implementations are encouraged to set reasonable limits on the resources they will expend when verifying [=cryptographic event logs=], including maximum numbers of events to process, maximum verification time, and maximum memory usage. When possible, verification can be optimized by caching the results of verifying earlier portions of the log and only verifying new events since the last cached verification point. For applications that do not require complete historical verification, implementations might depend on the [=cryptographic event log=] storage services to verify the history, since conforming storage services do not accept [=cryptographic event logs=] with malformed histories. Rate limiting the acceptance of [=DID=] resolution requests can prevent attackers from overwhelming systems with verification requests for complex [=DID documents=]. Clear documentation about the expected verification costs and performance characteristics helps implementers make informed decisions about resource allocation for [=DID=] verification operations.
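A resource-bounded verification loop along the lines suggested above might look like this. The limit values and the `verify_event` callback are assumptions for illustration; real verification of each event involves signature and hash checks.

```python
import time

def verify_with_limits(events, verify_event,
                       max_events=10_000, max_seconds=5.0):
    """Bound the work spent verifying a cryptographic event log so a
    maliciously long or complex log cannot exhaust resources.
    `verify_event` is a caller-supplied per-event check; limits are
    illustrative defaults, not normative values."""
    start = time.monotonic()
    for count, event in enumerate(events, start=1):
        if count > max_events:
            raise RuntimeError("event limit exceeded")
        if time.monotonic() - start > max_seconds:
            raise RuntimeError("verification time limit exceeded")
        if not verify_event(event):
            return False
    return True
```

Raising an exception on limit breach, rather than silently returning success, ensures that a truncated verification is never mistaken for a complete one.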
The [=witness=]-based architecture of `did:cel` creates an operational dependency on [=witness=] services being available and responsive when [=DID controllers=] need to update their [=DID documents=]. If [=witnesses=] become unavailable due to network outages, operational failures, or deliberate denial-of-service attacks, [=DID controllers=] might be unable to record new events in their [=cryptographic event logs=], effectively preventing them from making necessary updates such as key rotations in response to security incidents. This availability dependency could be exploited by attackers who compromise a [=DID controller=]'s keys and then launch denial-of-service attacks against [=witness=] services to prevent the legitimate controller from performing emergency key rotations.
[=DID controllers=] are encouraged to select multiple geographically and operationally distributed [=witnesses=] to reduce the risk of correlated failures. Systems can be designed to tolerate some [=witness=] unavailability by requiring only a subset of [=witnesses=] to attest to an event, though this reduces the security assurance provided by [=witness=] attestations. Implementing [=witness=] selection policies that automatically failover to alternative [=witnesses=] when primary [=witnesses=] are unavailable can maintain operational continuity during outages. For critical operations, [=DID controllers=] might pre-establish relationships with backup [=witness=] services that can be used if primary [=witnesses=] fail. [=Witness=] services are encouraged to implement high-availability architectures with redundancy and failover capabilities to minimize downtime. Clear service level agreements and monitoring of [=witness=] availability helps [=DID controllers=] make informed decisions about which [=witnesses=] to rely on for their security requirements.
The `did:cel` method uses modern cryptographic signing and hashing algorithms. While these cryptographic algorithms are currently considered secure, all cryptographic algorithms have a finite lifetime and eventually become vulnerable to attacks as computational capabilities advance or new cryptographic breakthroughs occur. [=DIDs=] created today might need to remain valid and verifiable for many years or decades, potentially outliving the security guarantees of the cryptographic algorithms they employ. As algorithms age, signatures and hashes created using those algorithms provide diminishing security. The immutable nature of the [=cryptographic event log=] means that while historical events cannot be re-signed with stronger algorithms, the historical chain of events can be signed with stronger algorithms (such as post-quantum signatures) under certain conditions (such as the security of SHA3-256 remaining uncompromised even if ECDSA becomes compromised).
[=DID controllers=] planning for long-term use of their [=DIDs=] are encouraged to monitor cryptographic algorithm recommendations and plan for upgrading algorithms or eventually migrating to [=DIDs=] using stronger algorithms before current algorithms are compromised. Implementing cryptographic layering by including multiple [=verification methods=] using different cryptographic algorithms, such as pre-quantum and post-quantum cryptographic algorithms, can provide continued security even if one algorithm is compromised, following the guidance in the [[[?VC-DATA-INTEGRITY]]] specification. Systems that verify [=DIDs=] can maintain policies about which cryptographic algorithms are acceptable and reject [=DIDs=] using deprecated algorithms that no longer provide adequate security. For historical preservation, the [=witness=] attestations in the log provide evidence that events were considered valid at the time they were created, even if the underlying cryptographic algorithms later become compromised. Clear documentation about the expected lifetime of cryptographic algorithms helps [=DID controllers=] plan appropriate migration timelines.
[=Witness=] [=data integrity proofs=] include timestamp information indicating when the [=witness=] attested to an event. While timestamps can be useful for understanding the temporal sequence of events and detecting anomalies, they also introduce potential for manipulation if [=witnesses=] collude or if their clocks are inaccurate or maliciously adjusted. An attacker who compromises multiple [=witnesses=] might create backdated attestations to make fraudulent events appear to have occurred earlier than they actually did, or forward-date attestations to hide the timing of malicious activities. Relying on timestamps for security-critical decisions can be problematic given the difficulty of ensuring accurate, tamper-proof time sources across distributed [=witness=] services.
Implementations are encouraged to use timestamps as informational indicators rather than security-critical values, and to corroborate timing information from multiple independent sources, such as the [=cryptographic event log=] storage services, before relying on it. When [=witnesses=] include timestamps in their proofs, using multiple [=witnesses=] with independent time sources can help detect significant clock discrepancies or manipulation attempts. Systems that detect large discrepancies in [=witness=] timestamps for the same event can flag those events for additional scrutiny. The [=hashlinked=] structure of the event log provides a partial ordering guarantee—events must have occurred in the order they appear in the chain—which is more reliable than timestamp values for determining relative event ordering. Clear documentation about the limitations of timestamp information helps prevent over-reliance on timing data that might be manipulated.
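The discrepancy check described above can be sketched by computing the spread between the earliest and latest [=witness=] timestamps attached to one event; what spread is "large" is a deployment decision, and the function name is an assumption.

```python
from datetime import datetime

def timestamp_spread_seconds(witness_timestamps):
    """Return the spread in seconds between the earliest and latest witness
    timestamps for a single event. A verifier might flag events whose
    spread exceeds a configured tolerance for additional scrutiny.
    Assumes ISO 8601 offset-form timestamps; sketch only."""
    times = [datetime.fromisoformat(t) for t in witness_timestamps]
    return (max(times) - min(times)).total_seconds()
```

Because individual timestamps can be manipulated, a large spread is a signal for review rather than grounds for automatic rejection.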
[=Cryptographic event logs=] are stored by storage services listed in the [=DID Document=] and made available during [=DID=] resolution, but the specification does not mandate a particular storage mechanism. Depending on the chosen storage approach—whether distributed ledgers, decentralized storage systems, centralized repositories, or peer-to-peer networks—different security considerations apply. If log storage is compromised, attackers might delete events to hide malicious modifications, modify historical events to change the [=DID document=] history, or deny access to legitimate log data preventing [=DID=] resolution. The integrity and availability of the storage system directly impacts the security and usability of the [=DID=].
[=DID controllers=] are encouraged to store their [=cryptographic event logs=] with multiple independent storage services to protect against data loss and ensure availability even if one storage location fails or is compromised. The [=hashlinked=] structure and [=witness=] proofs in the log provide integrity protection, meaning that modifications to stored log data can be detected during verification, but this does not prevent deletion or denial of access. Using storage systems with built-in redundancy, access controls, and audit logging can help protect log data from unauthorized modification or deletion. For public [=DIDs=], publishing logs to multiple publicly accessible storage services increases resilience and makes censorship more difficult. Implementations that retrieve logs are encouraged to verify the complete integrity of the log using the [=hashlinking=] and proofs, rather than trusting that stored data is authentic. Backup strategies that maintain copies of logs in geographically and operationally diverse locations can ensure long-term preservation and availability.
Each operation in the [=cryptographic event log=] contains a [=data integrity proof=] that cryptographically binds the operation to the [=DID document=] through a signature created with a [=verification method=] contained in the document. [=Witness=] proofs are also attached to each event in the [=cryptographic event log=]. Proper validation of these proofs is critical for security—if implementations skip proof verification or perform it incorrectly, they might accept fraudulent [=DID documents=] that were not actually created by the legitimate [=DID controller=]. The process of [=data integrity proof=] verification, which involves JSON processing, canonicalization, and signature verification, creates opportunities for implementation errors that could compromise security.
Implementations are strongly encouraged to use well-tested, standardized libraries for [=data integrity proof=] verification rather than implementing verification logic from scratch, as cryptographic implementations are prone to subtle errors that can completely compromise security. Following the verification algorithms specified in the [[[VC-DATA-INTEGRITY]]] specification precisely, without shortcuts or optimizations that might skip critical validation steps, is important for maintaining security. Verification processes are encouraged to validate all aspects of the proof including the signature value, the proof purpose, the verification method reference, and any additional proof metadata. Regular testing with both valid and intentionally malformed proofs can help ensure that implementations correctly reject invalid proofs. Implementers are advised to review the security considerations in the [[[VC-DATA-INTEGRITY]]] specification for additional guidance on proof verification.
The integrity of the [=cryptographic event log=] depends on each event containing a correct hash of the previous event in the `previousEventHash` property, creating an unbroken chain from the creation event to the most recent event. If implementations fail to properly verify that each `previousEventHash` value matches the actual hash of the preceding event, or if they skip hash verification entirely, attackers could insert fraudulent events, remove events from the middle of the chain, or create alternate histories without detection. This [=hashlinking=] is fundamental to the security model—without rigorous verification, the log provides no integrity guarantees.
Implementations that verify [=cryptographic event logs=] are advised to compute the hash of each event using the same canonicalization and hashing algorithms specified in the creation and update algorithms, and verify that the computed hash matches the `previousEventHash` value in the following event. Verification processes are encouraged to validate the complete chain from the creation event forward without skipping any events, as gaps in verification could hide inserted or modified events. The creation event, which has no previous event, represents a trust anchor and its authenticity can be verified through [=witness=] attestations and the [=data integrity proof=] on the operation. For performance reasons, implementations might cache verification results for earlier portions of the chain, but care is needed to ensure cached results are not reused when the log has been modified. Clear error handling and reporting when hash verification fails helps diagnose attacks or data corruption issues.
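The chain verification described above can be sketched as follows. Note that the canonicalization here (sorted-key JSON) is a stand-in for the canonicalization algorithm the specification actually mandates; a conforming implementation must use the spec's algorithm exactly, since any mismatch produces different hashes.

```python
import hashlib
import json

def event_hash(event: dict) -> str:
    """Hash an event with SHA3-256 over a canonical JSON serialization.
    Sorted-key JSON approximates canonicalization for this sketch only."""
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hashlib.sha3_256(canonical.encode("utf-8")).hexdigest()

def verify_chain(events) -> bool:
    """Check that every event's previousEventHash matches the hash of the
    preceding event, walking the full chain from the creation event
    forward without skipping any events."""
    for prev, curr in zip(events, events[1:]):
        if curr.get("previousEventHash") != event_hash(prev):
            return False
    return True
```

A full verifier would additionally check the [=data integrity proof=] and [=witness=] attestations on each event; the hash chain alone only proves that the events have not been reordered, inserted, or removed.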
The `did:cel` specification includes [=witness=] attestations for events but does not mandate how many [=witness=] proofs are required for an event to be considered valid or trustworthy. Different applications and trust models might require different numbers of [=witnesses=]—some might accept events with a single [=witness=] attestation, while others might require multiple [=witnesses=]. If the threshold is set too low, attackers who compromise a small number of [=witnesses=] might be able to gain a denial-of-service advantage. If set too high, legitimate operations might be blocked by [=witness=] unavailability or disagreement, creating denial-of-service conditions.
Implementations are encouraged to establish clear policies about [=witness=] proof requirements that balance security needs against operational flexibility. For high-security applications, requiring attestations from a majority or supermajority of a defined [=witness=] set provides protection against individual [=witness=] compromises. Lower-security applications might accept events with fewer attestations but implement monitoring to detect suspicious patterns. Policies can be adaptive, such as requiring more [=witnesses=] for security-critical operations like key rotations while accepting fewer [=witnesses=] for routine updates. When [=witnesses=] seem to be acting in a way that would lead to a denial of service, implementations are encouraged to investigate the discrepancy. Clear documentation about [=witness=] requirements helps [=DID controllers=] understand what level of [=witness=] attestation they need to achieve for their operations to be accepted by [=verifiers=]. [=Witness=] selection strategies that include [=witnesses=] with diverse operational and governance models can provide more robust validation than [=witnesses=] operated by related entities.
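An adaptive per-operation policy like the one suggested above might be expressed as a simple lookup table. The operation names, threshold values, and default are all hypothetical choices for illustration, not values defined by this specification.

```python
# Hypothetical policy table: operation names and thresholds are
# illustrative deployment choices, not normative values.
WITNESS_THRESHOLDS = {
    "create": 3,      # establishing a new DID is security-critical
    "rotateKey": 3,   # key rotations warrant extra assurance
    "update": 2,      # routine document updates
    "deactivate": 3,  # deactivation is irreversible
}

def required_witnesses(operation: str) -> int:
    """Return the minimum number of distinct witness attestations required
    before an event of the given type is accepted. Unknown operation types
    fall back to an assumed default of 2."""
    return WITNESS_THRESHOLDS.get(operation, 2)
```

Keeping the policy in data rather than code makes it easy to tighten thresholds for a defined [=witness=] set without changing verification logic.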
The security of [=DID document=] modifications relies on the [=DID controller=] maintaining exclusive control over the private keys corresponding to [=verification methods=] in the [=DID document=]. If an attacker gains access to these private keys through theft, social engineering, malware, or cryptographic compromise, they can create fraudulent updates to the [=DID document=] that appear legitimate because they carry valid [=data integrity proofs=]. Such unauthorized modifications might add attacker-controlled [=verification methods=], remove legitimate verification methods, change service endpoints, or modify other [=DID document=] properties, all while appearing to be authorized changes in the [=cryptographic event log=].
[=DID controllers=] are strongly encouraged to implement rigorous key management practices including secure generation, storage, and access control for private keys. Using hardware security modules, secure enclaves, or other tamper-resistant storage for keys that control critical [=DIDs=] can significantly reduce the risk of key theft. Regular security audits, monitoring for unexpected [=DID document=] changes, and maintaining offline backups of key material can help detect and recover from compromises. Implementing the principle of least privilege by using different [=verification methods=] for different purposes, with only minimal keys having authority to modify the [=DID document=], can limit the impact of key compromises. When unauthorized modifications are detected, rapid key rotation following the guidance in the key rotation section can help regain control and prevent further unauthorized changes.
As cryptographic algorithms age and newer, stronger algorithms become available, [=DID documents=] might contain [=verification methods=] using different cryptographic suites with varying security strengths. An attacker might attempt to force systems to use weaker cryptographic algorithms by exploiting verification implementations that default to weaker algorithms when multiple options are available. If successful, such downgrade attacks reduce the effective security to that of the weakest algorithm, potentially enabling signature forgeries or other cryptographic breaks that would not be possible against stronger algorithms.
Implementations are encouraged to maintain policies about which cryptographic algorithms are acceptable and to reject [=cryptographic event logs=] or proofs using algorithms that no longer provide adequate security. When [=DID documents=] contain multiple [=verification methods=] using different algorithms, verification processes are advised to require proofs using the strongest available algorithm rather than accepting weaker proofs. [=DID controllers=] are encouraged to remove deprecated [=verification methods=] using obsolete algorithms from their [=DID documents=] once stronger methods are in place, rather than maintaining weak methods for backward compatibility. Clear documentation about supported cryptographic algorithms and their expected security lifetimes helps both [=DID controllers=] and verifiers make informed decisions about algorithm selection. Following the cryptographic agility guidance in the [[[VC-DATA-INTEGRITY]]] specification enables smooth transitions to newer algorithms without compromising security during migration periods.
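A downgrade-resistant verifier policy of the kind described above could be sketched as follows. The strength ranking, the `MINIMUM_STRENGTH` cutoff, and the placeholder suite name are illustrative assumptions; actual policies would be driven by current cryptographic guidance rather than a hard-coded table.

```python
# Hypothetical sketch: reject proofs using deprecated cryptosuites and,
# when several acceptable proofs exist, prefer the strongest one. The
# ranking values below are illustrative, not normative.

SUITE_STRENGTH = {
    "ecdsa-rdfc-2019": 2,
    "eddsa-rdfc-2022": 2,
    "deprecated-example-suite": 0,  # placeholder for an obsolete suite
}
MINIMUM_STRENGTH = 1  # policy cutoff; suites below this are rejected

def select_proof(proofs):
    """Return the proof with the strongest acceptable cryptosuite,
    or None if no proof meets the minimum policy. Unknown suites
    are treated as unacceptable rather than trusted by default."""
    acceptable = [
        p for p in proofs
        if SUITE_STRENGTH.get(p["cryptosuite"], -1) >= MINIMUM_STRENGTH
    ]
    if not acceptable:
        return None
    return max(acceptable, key=lambda p: SUITE_STRENGTH[p["cryptosuite"]])
```

Treating unknown suites as unacceptable by default ("fail closed") is the design choice that prevents the downgrade path: an attacker cannot gain anything by presenting a proof under a suite the verifier has not explicitly vetted.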
The Working Group would like to thank the following individuals for reviewing and providing feedback on the specification (in alphabetical order):
TBD...
This section provides some examples of how `did:cel` is used in various other protocols and data formats.
The example below demonstrates a Verifiable Presentation Request query requesting that the [=holder=] verify control of a `did:cel` [=DID=]:
{
  "query": [{
    "type": "DIDAuthentication",
    "acceptedMethods": [{"method": "cel"}]
  }],
  "challenge": "99612b24-63d9-11ea-b99f-4f66f3e4f81a",
  "domain": "domain.example"
}
When using `did:cel`, the [=holder=] responds with the following DID Authentication response:
{
  "@context": ["https://www.w3.org/ns/credentials/v2"],
  "type": "VerifiablePresentation",
  "holder": "did:cel:zW1poyeoHaT1WftFBG8WKa6fCaH98tbKKMrkJWDqFeohTz1?storage=https%3A%2F%2Fstorage.example%2Fdids%2F",
  "proof": {
    "type": "DataIntegrityProof",
    "cryptosuite": "ecdsa-rdfc-2019",
    "verificationMethod": "did:cel:zW1poyeoHaT1WftFBG8WKa6fCaH98tbKKMrkJWDqFeohTz1?storage=https%3A%2F%2Fstorage.example%2Fdids%2F#zDnaeU8aVJn9pZjH7f249geMiz9UTRjfXUcc5pN4QsxVte57q",
    "challenge": "99612b24-63d9-11ea-b99f-4f66f3e4f81a",
    "domain": "domain.example",
    "created": "2025-12-07T14:58:42Z",
    "proofPurpose": "authentication",
    "proofValue": "z4vUFeRUpqaFLiSwkzN2Qiugw1yeA2DgJC6MTpqiwwzBkSNdaNMSYXAgLWC8UsJFb8dymbDQiBaQySuWGjMxBhGiZ"
  }
}
The DID Authentication response above instructs the [=verifier=] that the `did:cel` [=cryptographic event log=] for the provided [=DID=] can be found by fetching the following URL: `https://storage.example/dids/did:cel:zW1poyeoHaT1WftFBG8WKa6fCaH98tbKKMrkJWDqFeohTz1.cel.gz`.
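The URL derivation above could be sketched as follows; the `event_log_url` helper and its error handling are hypothetical, and the `.cel.gz` suffix mirrors the example in this section rather than a normative resolution rule.

```python
# Hypothetical sketch: derive the cryptographic event log URL from a
# did:cel DID URL carrying a percent-encoded `storage` parameter.
from urllib.parse import parse_qs

def event_log_url(did_url):
    """Split off the DID query, percent-decode the `storage` base URL,
    and append the bare DID plus the `.cel.gz` suffix used in the
    examples of this section."""
    did, _, query = did_url.partition("?")
    params = parse_qs(query)  # parse_qs percent-decodes values
    storage = params["storage"][0]  # raises KeyError if absent
    return f"{storage}{did}.cel.gz"
```

Applied to the `holder` value in the response above, this yields the `https://storage.example/dids/...cel.gz` URL that the [=verifier=] fetches.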
The example below demonstrates the use of a `did:cel` [=DID=] within a [=verifiable credential=]:
{
  "@context": [
    "https://www.w3.org/ns/credentials/v2",
    "https://www.w3.org/ns/credentials/examples/v2"
  ],
  "type": [
    "VerifiableCredential",
    "ExampleDegreeCredential"
  ],
  "issuer": "did:cel:zW1poyeoHaT1WftFBG8WKa6fCaH98tbKKMrkJWDqFeohTz1?storage=https%3A%2F%2Funiversity.example%2Fdids%2F",
  "validFrom": "2010-01-01T00:00:00Z",
  "credentialSubject": {
    "id": "did:cel:zW1aH98tbKKqFeohTz1poyeoHMrkJWDaT1WftFBG8WKa6fC?storage=https%3A%2F%2Fstorage.example%2Fdids%2F",
    "degree": {
      "type": "ExampleBachelorDegree",
      "name": "Bachelor of Science and Arts"
    }
  },
  "proof": {
    "type": "DataIntegrityProof",
    "created": "2025-12-07T19:47:10Z",
    "verificationMethod": "did:cel:zW1poyeoHaT1WftFBG8WKa6fCaH98tbKKMrkJWDqFeohTz1?storage=https%3A%2F%2Funiversity.example%2Fdids%2F#zDnaeU8aVJn9pZjH7f249geMiz9UTRjfXUcc5pN4QsxVte57q",
    "cryptosuite": "ecdsa-rdfc-2019",
    "proofPurpose": "assertionMethod",
    "proofValue": "z3oB59kTTajyXKx3FYJGk9Xvb158rLG2Pq6FQSNaGL6J9ieA3ipXGot5zWME4TXisz1vpfTo8DFjLpBwe4afL1r27"
  }
}