
EUDI Interoperability: Why Is It Hard?
Introduction
The European Digital Identity (EUDI) is a big step toward making life easier for people and businesses across the EU. The idea is simple: one secure digital wallet you can use to access both public and private services in any EU country. Behind the scenes, though, things aren’t quite so simple. EUDI is defined through a mix of legal rules, technical plans, and international standards, including:
- eIDAS 2.0 (Regulation (EU) 2024/1183) [1]: The legal foundation that sets the rules.
- Architecture and Reference Framework (ARF) [2]: The blueprint for how the EUDI Wallet should work.
- Technical Standards: These make everything compatible and secure; the main ones, such as OpenID4VCI [3], OpenID4VP [4], ISO mDL, and IETF SD-JWT VC, come up throughout this post.
In this blog, we’ll explore the technical side, how these standards are used in practice, and why making them all work together isn’t as easy as it sounds. All examples come from real situations we’ve encountered while trying to build and connect EUDI solutions.
Interoperability Isn’t One Thing - It’s a Stack
When discussing technical interoperability in the EUDI ecosystem, it’s tempting to imagine a simple scenario: if the Issuer, Holder, and Verifier follow the same standard, everything should work, right?
Not quite.
Interoperability isn’t just one thing - it’s a stack of things. Just as an electrical grid needs more than matching plug shapes (you also need the right voltage, frequency, grounding, and wiring), digital identity systems need alignment across multiple technical layers. The following sections explore the key interoperability layers that must align for things to work smoothly.
Credential Schemas: Do All Parties Understand the Structure and Meaning of the Data?
A key part of technical interoperability is ensuring that all parties (the Issuer, Holder, and Verifier) agree not just on how to exchange data but also on what the data means. This is where credential schemas come into play.
Let’s take a simple example: imagine different EU member states issuing digital passports as verifiable credentials. On paper, they’re all doing the same thing, but in practice, the details can vary. Consider the field for a passport number:
- One country might label it passport_number
- Another might use passport_no
To a human, these mean the same thing. But to a verifier system, like an airline checking your passport, those are completely different field names. Unless the verifier knows exactly what to look for, it might miss critical data. The same issue applies to other common attributes: given_name vs. first_name, or surname vs. family_name. Even date formats can cause confusion (2025-05-08 vs. 08/05/2025): which one is correct?
There’s also the question of character encoding: names can be written in different alphabets (e.g., Cyrillic, Greek, Latin), which affects matching and parsing.
One way to reduce ambiguity is to adopt a common schema, like the one defined for eu.europa.ec.eudi.pid.1 [6]. This provides a shared structure and naming convention that issuers and verifiers can rely on. But even this isn’t a silver bullet: optional (non-mandatory) fields can still vary from one credential to another. So, the question remains:
How can a verifier know in advance which attributes will be present?
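To make the problem concrete, here is a minimal sketch in Python of how a verifier might normalize differently named claims onto its own canonical names before matching. The alias table and claim values are purely illustrative (there is no official mapping of this kind), and the approach simply skips optional attributes that are absent:

```python
# Hypothetical aliases a verifier might maintain to bridge schema differences.
# The canonical names and their variants are illustrative, not an official mapping.
CLAIM_ALIASES = {
    "passport_number": ["passport_number", "passport_no"],
    "given_name": ["given_name", "first_name"],
    "family_name": ["family_name", "surname"],
}

def normalize_claims(raw_claims: dict) -> dict:
    """Map differently named claims onto the verifier's canonical names."""
    normalized = {}
    for canonical, aliases in CLAIM_ALIASES.items():
        for alias in aliases:
            if alias in raw_claims:
                normalized[canonical] = raw_claims[alias]
                break  # optional attributes that are absent are simply skipped
    return normalized

# A credential whose issuer uses its own field names (values are made up).
credential_claims = {"passport_no": "X1234567", "first_name": "Anna", "surname": "Novak"}
print(normalize_claims(credential_claims))
# {'passport_number': 'X1234567', 'given_name': 'Anna', 'family_name': 'Novak'}
```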
Credential Formats: Are they using compatible formats?
A key aspect of interoperability in EUDI is whether everyone is using compatible credential formats. The specification supports several formats:
- W3C Verifiable Credentials:
- JWT VC without JSON-LD
- JWT VC with JSON-LD
- Data Integrity VC with JSON-LD and Linked Data Proofs
- ISO Mobile Driving Licence (mDL) encoded in CBOR.
- IETF SD-JWT VC
Each format has different technical requirements that affect how credentials are issued and verified. For example, ISO mDL requires the direct_post.jwt method in the verification flow. If any system component (issuer, wallet, or verifier) does not support a given format, interoperability cannot be achieved. Agreeing on supported formats and ensuring consistent implementation across the ecosystem is essential for seamless integration.
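One way to catch format mismatches before they surface mid-flow is to compare the formats each component advertises and confirm there is at least one that all three support. A minimal sketch, assuming the format identifiers would be read from each component’s published metadata (the hard-coded sets below are illustrative):

```python
# Illustrative sets of supported credential formats; in practice these would be
# read from issuer, wallet, and verifier metadata rather than hard-coded.
issuer_formats = {"vc+sd-jwt", "mso_mdoc"}
wallet_formats = {"vc+sd-jwt", "jwt_vc_json"}
verifier_formats = {"jwt_vc_json", "mso_mdoc"}

common = issuer_formats & wallet_formats & verifier_formats
if not common:
    # No format is supported end-to-end, so interoperability fails even though
    # every component is individually "standards-compliant".
    print("No common credential format across issuer, wallet, and verifier")
else:
    print(f"Usable formats: {common}")
```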
Credential Query Language: Can Verifiers express their needs, and can Holders understand and respond correctly?
Interoperability depends not only on the credential format but also on how verifiers request information and how holders respond. In EUDI, this interaction is defined through credential query languages used in the OpenID4VP [4] protocol. The specification currently recognizes two query languages:
- DIF Presentation Exchange
- Digital Credentials Query Language (DCQL)
For interoperability, both the verifier and the holder must understand and support the same query language. The verifier defines the request, and the holder must be able to interpret and respond correctly. A recent development complicates matters: starting with OpenID4VP [4] draft version 26, DIF Presentation Exchange is no longer supported. Only DCQL remains the required query language. This change has significant implications, especially for existing implementations based on the older drafts.
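To give a feel for what such a request looks like, here is a rough sketch of a DCQL query, expressed as a Python dict. The structure follows recent OpenID4VP drafts, but the credential type, claim names, and format identifier are illustrative and may differ depending on the draft version in use:

```python
import json

# Rough DCQL query asking for a PID credential in SD-JWT VC format with two claims.
# Field names follow recent OpenID4VP drafts; details can change between drafts.
dcql_query = {
    "credentials": [
        {
            "id": "pid_request",
            "format": "dc+sd-jwt",
            "meta": {"vct_values": ["eu.europa.ec.eudi.pid.1"]},
            "claims": [
                {"path": ["given_name"]},
                {"path": ["family_name"]},
            ],
        }
    ]
}

print(json.dumps(dcql_query, indent=2))
```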
Trust Model: Do the components trust each other?
In the EUDI ecosystem, trust between the issuer, holder, and verifier is essential. Without it, the secure exchange of credentials simply doesn’t work. But while the idea of “trust” sounds straightforward, implementing it in a way that’s interoperable across multiple systems is one of the most complex challenges we’ve faced. At a minimum, TLS is required to ensure encrypted communication. But encryption alone isn’t enough. Each component must also be able to identify and validate the others. The EUDI architecture introduces a shared trust infrastructure, where components can discover and verify one another’s metadata, certificates, and identifiers. The details, however, are far from simple.
Trust Between Holder and Issuer
When a credential is issued, the issuer must bind its identity to the credential. The issuer’s identifier might be:
- A URL
- A Decentralized Identifier (DID)
- An X.509 certificate chain
Each of these methods requires specific implementation support on both sides. The holder must validate the issuer’s identity against a trusted source.
Trust isn’t a one-way street. The issuer must also trust the holder, especially the wallet storing the credential. This means verifying that the holder’s environment is secure and meets the issuer’s requirements. Several mechanisms support this, such as:
- Key attestation
- Client authentication
- Wallet attestation
An important step in the issuance process is key binding, where the issuer ensures that the holder controls the private key tied to the credential. There are multiple ways to do this, e.g. by using a JWK, a DID, an X.509 certificate chain, or through attestation mechanisms. Choosing the right method depends on the use case, and each has its trade-offs.
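As one simplified illustration of key binding with a JWK: the issuer embeds the holder’s public key in the credential, typically in a cnf (confirmation) claim, so that later presentations can be checked against that key. A minimal sketch of such a payload, with placeholder values rather than real keys or personal data:

```python
# Simplified sketch of a credential payload that binds the holder's key via a
# "cnf" (confirmation) claim containing the holder's public JWK.
# All values are placeholders, not real keys or personal data.
credential_payload = {
    "iss": "https://issuer.example.com",
    "vct": "eu.europa.ec.eudi.pid.1",
    "given_name": "Anna",
    "cnf": {
        "jwk": {
            "kty": "EC",
            "crv": "P-256",
            "x": "base64url-encoded-x-coordinate",
            "y": "base64url-encoded-y-coordinate",
        }
    },
}

# At presentation time, the verifier checks that the holder's proof (e.g. a
# key-binding JWT) was signed with the private key matching cnf["jwk"].
```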
Trust Between Holder and Verifier
When a verifier requests a credential presentation, the holder must evaluate the verifier’s identity and legitimacy before sharing any data. This can be based on several mechanisms, including:
- Pre-registration
- Redirect URI validation
- OpenID Federation
- DID
- Verifier attestation
- X.509
Authorization requests may be digitally signed, which means the holder must validate those signatures, too. This ensures the request is authentic and hasn’t been tampered with. Once the holder responds, the verifier must check the holder’s signature on the presentation to confirm its integrity and authenticity.
Trust Between Issuer and Verifier
Finally, the verifier must trust the issuer of the credential being presented. This includes:
- Verifying the issuer’s signature on the credential
- Checking that the issuer is recognized in a trusted infrastructure
Again, this validation may depend on DIDs, X.509 chains, or other mechanisms, and the choice of method affects what infrastructure the verifier must support. The complexity comes from the many different ways trust can be implemented.
Implementing a full DID framework can be a major project in itself, and its components often don’t share the same trust infrastructure. There are so many options, and when you start combining them across systems, the number of possible variations grows rapidly and becomes hard to manage.
Data Encodings (QR codes): Are they using the same structure and rules when encoding requests?
QR codes play a key role in the EUDI protocols. They appear in two critical moments:
- During issuance, where the issuer delivers a credential offer to the holder.
- During verification, where the verifier sends an authorization request to the holder.
At both stages, QR codes are used to transport requests, but how the data is encoded varies, and that creates interoperability challenges.
Value vs. Reference Encoding
Credential offers and authorization requests can be encoded in two ways:
- As a value: the full payload is embedded directly in the QR code.
- As a reference: the QR code contains only a URL that the wallet can resolve to fetch the full request.
While both options are valid, reference encoding is generally preferred, especially in production settings, because value-based encoding can produce massive QR codes that are difficult to scan reliably.
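The difference is easiest to see side by side. Below is a sketch of the two encodings for a credential offer, using the credential_offer and credential_offer_uri parameters from OpenID4VCI; the offer content and URLs are made up, and parameter names follow recent drafts:

```python
import json
from urllib.parse import urlencode

# A made-up credential offer, used only to compare the two encodings.
offer = {
    "credential_issuer": "https://issuer.example.com",
    "credential_configuration_ids": ["eu.europa.ec.eudi.pid.1"],
    "grants": {
        "urn:ietf:params:oauth:grant-type:pre-authorized_code": {
            "pre-authorized_code": "abc123"
        }
    },
}

# By value: the whole JSON payload goes into the QR code, which grows quickly.
by_value = "openid-credential-offer://?" + urlencode({"credential_offer": json.dumps(offer)})

# By reference: the QR code only carries a URL the wallet resolves to fetch the offer.
by_reference = "openid-credential-offer://?" + urlencode(
    {"credential_offer_uri": "https://issuer.example.com/offers/550e8400"}
)

print(len(by_value), "characters encoded by value")
print(len(by_reference), "characters encoded by reference")
```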
Custom URI Schemes
Another issue is the lack of consistency in URI schemes. These schemes tell the wallet how to interpret the content of the QR code. For credential offers, the scheme is fairly standardized:
openid-credential-offer://
But for authorization requests, there’s no clear standard, and multiple schemes are currently in use across different implementations:
openid4vp://
eudi-openid4vp://
mdoc-openid4vp://
openid-vc://
haip://
This inconsistency makes implementation harder, as wallets must support all of these schemes, often without clear documentation or guidance.
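In practice this often ends up as a wallet-side allow-list. Here is a sketch of how a wallet might classify a scanned QR payload by its URI scheme; the accepted schemes simply mirror the examples above, and the classification logic is our own illustration rather than anything mandated by the specifications:

```python
from urllib.parse import urlparse

# Authorization-request schemes observed across implementations; a wallet that
# wants to interoperate broadly often has to accept all of them.
AUTHORIZATION_REQUEST_SCHEMES = {
    "openid4vp", "eudi-openid4vp", "mdoc-openid4vp", "openid-vc", "haip",
}

def classify_qr_payload(payload: str) -> str:
    scheme = urlparse(payload).scheme
    if scheme == "openid-credential-offer":
        return "credential_offer"
    if scheme in AUTHORIZATION_REQUEST_SCHEMES:
        return "authorization_request"
    return "unknown"

print(classify_qr_payload("eudi-openid4vp://?request_uri=https%3A%2F%2Fverifier.example.com%2Freq%2F42"))
# authorization_request
```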
Challenges with Query Parameters
In addition to URI schemes, authorization requests may also be encoded as query parameters. This adds another layer of complexity. While query strings are inherently flat, many parameters (such as presentation_definition) are structured as nested JSON. Encoding and parsing such data within a flat URL structure is non-trivial and error-prone.
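A short sketch of what that means in practice: the nested presentation_definition JSON has to be serialized and URL-encoded into a single query parameter, and the wallet must reverse the process without losing structure. The definition below is trimmed down for illustration:

```python
import json
from urllib.parse import urlencode, parse_qs, urlparse

# Trimmed-down presentation definition, used only to illustrate the encoding problem.
presentation_definition = {
    "id": "pid-request",
    "input_descriptors": [
        {"id": "pid", "constraints": {"fields": [{"path": ["$.given_name"]}]}}
    ],
}

# Verifier side: nested JSON is flattened into a single URL query parameter.
request_url = "eudi-openid4vp://?" + urlencode({
    "client_id": "https://verifier.example.com",
    "presentation_definition": json.dumps(presentation_definition),
})

# Wallet side: the query string is parsed and the JSON re-inflated.
params = parse_qs(urlparse(request_url).query)
recovered = json.loads(params["presentation_definition"][0])
assert recovered == presentation_definition
```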
Protocol Branches: Are they following the same flow, or diverging into different protocol versions?
One of the challenges in EUDI interoperability is the number of protocol variants that exist. These variations can significantly impact compatibility between components, even when they technically follow the same standard. Let’s look at a few key examples to understand where protocol divergence can occur.
Issuance Flow
There are two main authorization flows in credential issuance:
- Authorization Code Flow: This flow is similar to the OAuth 2.0 Authorization Code Grant, involving user interaction and redirect-based authorization.
- Pre-Authorized Code Flow: A simpler variant where authorization is handled before the protocol starts, removing the need for interactive login during issuance.
But these aren’t the only variables. The protocol also allows for different flow configurations:
- Wallet-initiated vs. Issuer-initiated
- Same-device vs. Cross-device
- Immediate vs. Deferred issuance
There are even optional security enhancements (see the sketch after this list), such as:
- tx_code, a second step in the pre-authorized flow, acting like a two-factor token.
- nonce usage to bind proofs to the session and prevent replay attacks.
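For example, a token request in the pre-authorized code flow that includes a tx_code might look roughly like this sketch; the endpoint URL, codes, and tx_code value are placeholders, and the parameter names follow recent OpenID4VCI drafts:

```python
import requests

# Illustrative token request for the pre-authorized code flow with a tx_code.
# The endpoint and all values are placeholders, not a real issuer.
token_response = requests.post(
    "https://issuer.example.com/token",
    data={
        "grant_type": "urn:ietf:params:oauth:grant-type:pre-authorized_code",
        "pre-authorized_code": "SplxlOBeZQQYbYS6WxSbIA",
        "tx_code": "493536",  # second factor delivered to the user out of band
    },
)
print(token_response.status_code)
```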
Each of these options adds flexibility, but they also introduce complexity. Interoperability depends on both the wallet and the issuer supporting the same subset of features. If one party doesn’t support a specific variant, the protocol may fail.
Verification Flow
The verification flow includes many of the same options as issuance:
- Same-device vs. Cross-device
- Use of client_id, which can be handled via pre-registration, redirect URI validation, and other mechanisms.
When the holder sends the verifiable presentation, there are two primary response methods:
- direct_post
- direct_post.jwt
The direct_post.jwt method is more secure but also more complex. It requires the wallet to create and encrypt a JWT, which involves cryptographic key handling. Importantly, some credential formats, like mso_mdoc (the credential format behind the mDL, Mobile Driving Licence), only work via direct_post.jwt, making its implementation non-negotiable for interoperability.
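A simplified sketch of the difference between the two response modes is shown below. The parameter names correspond to the Presentation Exchange based flow, and the token and JWT values are placeholders; with direct_post.jwt the wallet would actually have to build and sign and/or encrypt the JWT itself:

```python
import requests

RESPONSE_URI = "https://verifier.example.com/response"  # illustrative endpoint

# direct_post: response parameters are sent as plain form fields.
requests.post(RESPONSE_URI, data={
    "vp_token": "<verifiable presentation>",
    "presentation_submission": "{...}",
    "state": "xyz",
})

# direct_post.jwt: the same parameters are wrapped into a single signed and/or
# encrypted JWT, which the wallet must construct using its cryptographic keys.
requests.post(RESPONSE_URI, data={
    "response": "<signed/encrypted JWT carrying vp_token and the other parameters>",
})
```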
The Interoperability Challenge
What this tells us is clear: protocol flexibility comes at the cost of interoperability. The more options the protocol allows, the more effort is required to ensure all parties implement the same subset of features.
Cryptography: The Foundation Behind Everything
None of this would be possible without cryptography. At its core, interoperability in EUDI depends on cryptographic compatibility, the ability for different components to sign, verify, encrypt, and decrypt using the same agreed-upon algorithms. To ensure alignment, the EUDI ecosystem follows a shared baseline defined in the SOGIS Agreed Cryptographic Mechanisms 1.2 [7]. These guidelines set the standard for which cryptographic algorithms are acceptable in secure digital identity systems across the EU.
What Happens When Algorithms Don’t Match?
Here’s a simple example:
If the holder signs proofs using the ES256 algorithm, but the issuer only supports RS256, the issuer can’t verify the proof, and therefore the holder can’t receive the credential. Even though both parties are technically compliant with the protocol, they’re cryptographically incompatible. This type of mismatch isn’t just theoretical; it’s a real-world problem that often appears during integration between independently developed components.
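A minimal sketch of the kind of pre-flight check that catches this early: before building a proof, the wallet compares its own signing algorithms with those the issuer advertises in its metadata. The algorithm lists below are illustrative:

```python
# Algorithms the wallet can use for proof signing vs. what the issuer's metadata
# says it accepts; both lists are illustrative.
wallet_proof_algs = ["ES256"]
issuer_supported_algs = ["RS256"]

usable = [alg for alg in wallet_proof_algs if alg in issuer_supported_algs]
if not usable:
    # Both sides follow the protocol, but without a shared algorithm the issuer
    # can never verify the wallet's proof.
    raise RuntimeError(
        f"No common proof signing algorithm: wallet={wallet_proof_algs}, issuer={issuer_supported_algs}"
    )
```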
Looking Ahead: The Cryptographic Landscape Is Evolving
We’re already seeing discussions around next-generation cryptographic schemes such as:
- BBS+ and Camenisch-Lysyanskaya (CL) signatures for advanced selective disclosure and unlinkability.
- Post-quantum algorithms, like SLH-DSA-SHAKE-192s and ML-DSA-65, which may become critical as quantum threats emerge.
If these become mandatory, the interoperability landscape will shift significantly. Supporting them will require major updates to wallets, issuers, and verifiers, as well as aligned agreements across the ecosystem.
Testing: How can we build an environment where interoperability can be tested?
One often overlooked, but critically important aspect of interoperability is testing. While not traditionally considered a separate layer of interoperability, reliably testing and validating integration between components is essential for real-world success. We encountered several practical challenges in our experience testing interoperability between an EUDI Wallet (acting as the holder) and FortID components (issuer and verifier).
Mobile Wallets Are Hard to Automate
One of the main difficulties is the fact that the EUDI Wallet is a mobile application. Unlike web-based systems, mobile apps don’t offer hooks or interfaces for easy automation. This makes continuous integration and automated interoperability testing much harder: every single test involving the wallet often requires manual interaction.
TLS and Certificates in Test Environments
Another common pain point is working with TLS in test environments. The EUDI specifications require secure connections, but in many test setups it’s more practical to use self-signed certificates. These offer flexibility and ease of deployment but often conflict with wallet or browser policies that expect publicly trusted certificate chains, which adds complexity to test configurations.
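On the server-to-server side, one pattern that helps is to point the HTTP client at the test CA explicitly instead of disabling verification; mobile wallets are less forgiving. A sketch of that pattern, with placeholder paths and URLs:

```python
import requests

# In test environments we prefer trusting the self-signed/test CA explicitly over
# disabling TLS verification. The bundle path and URL below are placeholders.
TEST_CA_BUNDLE = "/path/to/test-ca.pem"

response = requests.get(
    "https://issuer.test.local/.well-known/openid-credential-issuer",
    verify=TEST_CA_BUNDLE,  # validate against the test CA instead of the system store
)
print(response.status_code)
```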
Progress Through Community Contribution
To address some of these issues, we submitted a request [8] to improve the Verifier Reference Implementation, specifically to add support for handling test scenarios such as self-signed certificates. This kind of collaborative feedback is crucial for building a testing ecosystem that reflects real-world deployment conditions.
Summary
Each of these layers introduces opportunities for misalignment. Even if two systems use the same overall protocol, a mismatch in just one layer (e.g., a different interpretation of the credential schema or an unsupported cryptographic algorithm) can break interoperability. In the rest of this post, we’ll look at two further sources of friction, evolving specifications and imperfect reference implementations, with real-world examples of where things go wrong and what it takes to make them work.
Specification challenges
One of the biggest hurdles to interoperability is that some of the key standards, like OpenID4VC ([3] and [4]) and others related to eIDAS 2.0, are still in draft form. That means they’re evolving, not finalized, and sometimes changing in ways that can break existing implementations. We’ve seen firsthand how even small changes in a draft spec (a tweak in a parameter name, or a shift in the protocol flow) can have a big impact. Suddenly, two systems that used to work together no longer do, simply because they’re following different versions of the same draft.
Let’s look at two examples to get a sense of how impactful even minor spec updates can be:
- In the OpenID4VCI [3] specification, between drafts 13 and 14, a new endpoint was introduced to fetch a fresh nonce. While this change may seem minor, it required additional implementation work and, more importantly, broke backward compatibility.
- In the OpenID4VP [4] specification, the update from draft 21 to 22 introduced new semantics for client_id. This wasn’t just a surface-level change; it affected multiple parts of our codebase and required deeper adjustments to maintain compatibility.
We encourage readers to check both specifications’ “Document History” sections. You’ll see how frequently and significantly the drafts evolve, often in ways that impact implementations throughout a significant portion of the code base.
Inconsistent Metadata Paths: Another Interoperability Pitfall
Here’s yet another example of how key concepts are not consistently aligned across specifications. When retrieving Credential Issuer metadata, the expected location is defined as [3]:
<Credential Issuer Identifier>/.well-known/openid-credential-issuer
This means the /.well-known segment is appended to the issuer’s identifier. However, in other parts of the specification [19], particularly for different types of metadata, /.well-known is used as a prefix to the path instead.
For example:
https://issuer.com/protocol/oid4vci/issuer/891046fc-2460-4a8a-873e-840c13b20ad5/.well-known/openid-credential-issuer
https://issuer.com/.well-known/jwt-vc-issuer/protocol/oid4vci/issuer/891046fc-2460-4a8a-873e-840c13b20ad5
The result? Some implementations try both variants, attempting to fetch metadata with /.well-known as either a prefix or a suffix, simply to cover all bases.
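A sketch of that defensive pattern, building both candidate URLs from an issuer identifier and trying them in turn; the issuer URL and well-known segment below are illustrative:

```python
from urllib.parse import urlparse
import requests

def candidate_metadata_urls(issuer: str, segment: str) -> list:
    """Build both placements of the well-known segment: appended as a suffix to
    the issuer identifier, and inserted as a prefix right after the host."""
    parsed = urlparse(issuer)
    path = parsed.path.rstrip("/")
    suffix_variant = f"{issuer.rstrip('/')}/.well-known/{segment}"
    prefix_variant = f"{parsed.scheme}://{parsed.netloc}/.well-known/{segment}{path}"
    return [suffix_variant, prefix_variant]

def fetch_metadata(issuer: str, segment: str) -> dict:
    # Try both variants, since implementations disagree on the placement.
    for url in candidate_metadata_urls(issuer, segment):
        resp = requests.get(url)
        if resp.ok:
            return resp.json()
    raise RuntimeError(f"No metadata found for {issuer} ({segment})")

# Illustrative usage:
# fetch_metadata("https://issuer.example.com/oid4vci/issuer/123", "openid-credential-issuer")
```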
This kind of inconsistency, while small on the surface, adds complexity to implementation and undermines the goal of predictable, interoperable behavior across systems.
Reference Implementation Challenges
Even with solid specs, the actual reference implementations for the Issuer, Holder, and Verifier aren’t perfect; they’re still evolving, just like the standards themselves. And that means: yes, bugs happen.
Some of these bugs are minor and easy to work around. In many cases, once reported, they get fixed quickly. But here’s the real challenge: the fix often lands in the latest version of the component, and upgrading to that version might break compatibility with older components that haven’t been updated yet. This creates a tricky situation: do you stick with the buggy but compatible version, or upgrade and risk breaking interoperability?
At FortID, we take a hybrid approach. When we encounter a bug, we implement a temporary workaround, document the issue carefully, and report it to the maintainers of the reference implementation. Once the bug is fixed, we remove the workaround, as long as the updated version doesn’t break interoperability with other components in our system.
Here are a few of the issues we’ve reported. We’ve encountered bugs across multiple system layers, showing that interoperability concerns span the whole stack. Testing often ends up being an end-to-end test of the Reference Implementation [9] itself, not just of how systems connect.
Credential format issues
- OpenID4VP Flow - VP Token Invalid Due to Serialization Order [10]
- Authorization Response header contains invalid apu value [11]
Protocol issues
- RedirectUri Client ID Scheme Not Supported in Reference Wallet (OpenID4VP) [12]
- Missing credential_identifiers field in authorization_detail of Token Response [13]
- Proof JWT contains iss claim in Pre-Authorized Code Flow, but it must be omitted [14]
Cryptography issues
- JWK Coordinate Padding coordinates are not always 32 bytes long [15]
- Verifiable Credential includes base64 padding, conflicts with specification [16]
Testing issues
- Support for importing Issuer’s TLS Certificate (for testing) [8]
Credential query language issues
- Input Descriptor must only contain id, format and constraints [17]
- Constraints limit_disclosure must be set to required (input descriptors) [18]
Conclusion
Achieving full interoperability in the EUDI ecosystem is a complex challenge. There are two main reasons for this:
- The specifications are still in draft form and continue to evolve, often introducing changes that affect compatibility.
- The specifications are broad and flexible, allowing many implementation choices, which can lead to systems that technically follow the spec but don’t work together in practice.
To reach true interoperability, vendors would need to:
- Stay aligned with the latest version of each specification
- Avoid breaking backward compatibility
- Implement the complete set of features across all roles (Issuer, Holder, Verifier)
In reality, this is difficult, if not impossible. Limited time, budget, and development resources make it hard to keep up. In some cases, different draft versions of a spec introduce conflicting requirements, making full compatibility technically infeasible.
What’s likely to happen next is a gradual path toward interoperability. Most vendors will first ensure their own systems work end-to-end. Then, partial interoperability will emerge between early adopters.
True cross-vendor and cross-border interoperability will only become a real focus, and hopefully achievable, once each member state finishes and launches its national EUDI framework.