Saturday, June 8, 2013

Tangent: Design thoughts about next-gen PKI in today's world

With the revelations of massive NSA surveillance of Americans, I have been thinking about how to set up end-to-end cryptography securely in a way that offers real guarantees.  This is by no means limited to offering security relative to governments: organized criminals can mount wiretap-like attacks using man-in-the-middle and other techniques.

Designing a perfectly secure system is probably not possible.  A determined enough attacker will be able to gain access to any communications given enough effort, resources, and determination.  The goal of the system I am describing, however, is to maximize the resources required, forcing eavesdroppers to focus only on the most valuable targets and keeping any compromise as narrow as possible.  Additionally, metadata is to some extent impossible to protect to the same degree that content can be.

In general, SSL is good enough for key negotiation and the like, provided the system can be made resistant (not perfectly secure) against man-in-the-middle attacks and the authorities involved can be sufficiently trusted and validated over time.

Most key exchange approaches focus solely on minimizing risk at the moment of key exchange (i.e. reducing synchronic risk).  This approach is different in that it focuses on resistance to exposure over time (reducing diachronic risk) and seeks to provide as much notification of compromised communications as possible.

Such an approach will *not* protect people against government spying in jurisdictions where keys can be demanded via subpoena or even warrant.  It does, however, seek to force authorities to obtain keys from the people under surveillance rather than rely on digital surveillance alone.

In general, as much as I dislike SSL/TLS (as I dislike pretty much every attempt to port OSI protocols to TCP/IP), it is well developed and well understood, and the security dimensions of the protocol are well documented.  Additionally, unlike IPsec, it is appropriate for cases where the data may need to be routed and rerouted among different servers, possibly altering routing destinations.  It is therefore the approach I suggest building upon.

This design is not intended to be entirely anti-government.  Given the amount of cyber-espionage today, such a framework may be of interest to governments trying to assure the security of their own communications.

Vulnerabilities of SSL


SSL has several known vulnerabilities in its design and architecture.  Many of these are the result of basic design tradeoffs in the protocol.  The risk model has shifted, however: large-scale organized crime and surveillance by well-funded governments, including China and the United States, make the world a very different place.

The threat model SSL is designed to address is one where a large number of low- to mid-value targets are protected against relatively casual or trivial attacks.  As attacks by foreign and domestic governments and by organized criminals have become more sophisticated, the basic structure of how we use SSL and TLS has not evolved at the same pace.

The two major vulnerabilities of SSL are:

  • Large, central certificate authorities present single points of attack, and
  • Man-in-the-middle attacks are possible wherever neither side has prior expectations about the other's certificate authorities.

My proposal below focuses on broadening the range of threats against which security is provable.  It is not inconsistent with existing approaches built around central certificate authorities; rather, it adds a layer of diachronic protection.

I do not believe this is particularly useful to criminal organizations, and it poses problems when scaling down to the individual level (these can be solved, however).  Nonetheless it should give corporations, governments, and even individuals the ability to ensure that their communications are not being unlawfully reviewed without their knowledge.

Solution to Third Party CA Vulnerability


Third-party CAs are a current vulnerability of the SSL system.  If keys are registered with or obtained through a third party, its processes or systems can be attacked to obtain bogus certificates, or to retrieve keys sufficient to forge certificates.  Third-party CAs therefore have to be incredibly secure.

The problem, though, is that no system is that secure.  Computer viruses have been planted in US drone operation centers, and spear-phishing attacks have succeeded against some of the most secure US government networks; no organization should assume its systems cannot be breached.

My proposal is to divide trust between an external certificate authority and an internal certificate authority, both of which are tracked over time in the key negotiation process.  The external certificate authority's job remains to validate that a certificate is issued to the proper individual or organization (essentially acting as an internet-based notary public), while the internal certificate authority's job is to identify individuals and services within that organization.  Because of the structure of the internet, and because of current practice, I would recommend keying this to the purchased domain name.

This by itself essentially means that the operational encryption keys are certified not by root certificate authorities but rather by an internal tier.  In essence, root certificate authorities would issue only certificates certifying things like "this domain is owned by the party we issued the certificate to," while subdomains would be a local matter.

This does not interfere with provable security if the following rule is enforced:  operational certificates for resources at a domain MUST be regarded as provably secure ONLY IF they are issued by a certificate authority whose own certificate indicates it was issued for the domain in question (possibly transitively) by a trusted root authority.
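
To make the rule concrete, here is a minimal sketch in Python of how a client might enforce it.  The Certificate structure, the within_domain helper, and the TRUSTED_ROOTS set are hypothetical stand-ins for real X.509 parsing and a real trust store, not an actual implementation.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Certificate:
        subject: str                     # e.g. "smtp.example.com" or "example.com"
        issuer: Optional["Certificate"]  # None marks a self-signed root

    # Root subjects distributed out of band (browser or OS trust store).
    TRUSTED_ROOTS = {"Example Root CA"}

    def within_domain(name: str, domain: str) -> bool:
        """True if name is the domain itself or one of its subdomains."""
        return name == domain or name.endswith("." + domain)

    def provably_scoped(cert: Certificate, domain: str) -> bool:
        """Accept an operational certificate only if every link below the
        root stays inside `domain` and the chain ends at a trusted root."""
        current = cert
        while current.issuer is not None:
            if not within_domain(current.subject, domain):
                return False             # the resource chain is broken
            current = current.issuer
        return current.subject in TRUSTED_ROOTS

Note that a compromised root could still sign a bogus domain-scoped CA certificate, which is why the diachronic checks described below remain necessary.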

Note that this means two things: an attack on a root CA can no longer reveal keys useful for eavesdropping on key exchange, but it could still reveal keys useful for carrying out a man-in-the-middle attack.  It thus modestly increases the effort needed to eavesdrop on SSL-protected connections, though that alone is not much protection given the complexity of the attack required in the first place.  The real value comes from the diachronic protection against the man in the middle, and that is where the division proves its worth.

This is not limited to three levels, but it is worth noting that the certificates at each level would need to be checked to make sure the resource chain above is not broken.  A fourth level might be necessary when scaling down to individual consumers (who typically do not own the domains they send email from).


Diachronic Protection against the Man in the Middle


The approach mentioned above is chiefly useful because it allows one to track certificate authorities and their keys over time relative to a domain.  A sudden change could indicate a man in the middle, and with mutual authentication both sides should get alarms.

I would propose requiring that a new certificate be signed with the previous key as well as the current one.  A revoked and re-issued certificate would then be signed both by the previous certificate's key (as evidence of continuity) and by the parent certificate authority.

This means you have strong evidence not only that the certificate was issued to someone the certificate authority is willing to vouch for, but also that it was received by the holder of the private key of the previous certificate.  In the event that a key is genuinely lost and a certificate re-issued without it, the holder would probably want to say something.

Now, this establishes a timeline of key changes which can be tracked, and as keys are issued, timelines diverge.  The key issued by the root CA is no longer sufficient to establish new connections without warning of a diverged timeline: connections with previously unknown parties can still be eavesdropped on, but parties who have been in contact before will detect that something is wrong and can alert their users to a possible problem.  This gives both sides an opportunity to avoid trouble.

Of course, an alert may simply mean that the previous key was lost in some larger loss of data, so it does not necessarily indicate a man in the middle.  Unexplained alerts, however, do indicate a problem.
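
As a rough illustration, the following Python sketch shows how a client-side cache might apply this: the structures and the signed_with check are hypothetical placeholders for real certificate parsing and real signature verification.

    from dataclasses import dataclass, field

    @dataclass
    class PeerCertificate:
        subject: str
        public_key: bytes
        # Keys whose signatures appear on this certificate as continuity evidence.
        continuity_signers: set = field(default_factory=set)

    def signed_with(cert: PeerCertificate, key: bytes) -> bool:
        """Stand-in for verifying a signature on the certificate made with `key`."""
        return key in cert.continuity_signers

    class DiachronicCache:
        """Remembers the last accepted key per peer and flags divergent timelines."""
        def __init__(self):
            self.last_seen = {}          # subject -> last accepted public key

        def check(self, cert: PeerCertificate) -> str:
            previous = self.last_seen.get(cert.subject)
            if previous is None or previous == cert.public_key:
                self.last_seen[cert.subject] = cert.public_key
                return "ok"              # first contact, or key unchanged
            if signed_with(cert, previous):
                self.last_seen[cert.subject] = cert.public_key
                return "ok"              # continuity proven by the previous key
            return "warn"                # diverged timeline: lost key or man in the middle

A "warn" result corresponds to the diverged-timeline alarm described above: it does not prove a man in the middle, only that continuity with the previously seen key cannot be demonstrated.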

Active Man in the Middle Detection


Additionally, this structure should enable man-in-the-middle detection for any real-time bidirectional communication.  Its effectiveness depends on a certificate cache, which allows diachronic tracking of certificates for previously known resources.  For new connections, however, there is a problem.

One solution is to orchestrate several additional certificate requests to other resources one already knows about, with the legitimate request occurring at a random position among them.  The observer in the middle cannot determine which request is the legitimate one, and so cannot, without extensive review, even guess which parties are previously unknown.  This would make it particularly difficult to eavesdrop on communications in volume.
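
A rough sketch of that idea in Python follows; fetch_certificate is a hypothetical placeholder for an ordinary TLS handshake that returns the peer's certificate, and the batch size is arbitrary.

    import random

    def fetch_certificate(host: str) -> bytes:
        """Placeholder: in practice, open a TLS connection to `host` and
        return the certificate it presents."""
        return b""

    def camouflaged_lookup(target: str, known_hosts: list, decoys: int = 8) -> bytes:
        """Request certificates from several already-known hosts plus the
        real target, in shuffled order, so an observer cannot tell which
        lookup corresponds to a genuinely new contact."""
        batch = random.sample(known_hosts, min(decoys, len(known_hosts)))
        batch.append(target)
        random.shuffle(batch)
        results = {host: fetch_certificate(host) for host in batch}
        return results[target]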

Additionally, once one contact pair is being eavesdropped on, removing the eavesdrop triggers a diachronic protection warning.

Scaling Down to the Personal


The big thing required for scaling this down is to recognize that an additional tier is needed, because individuals often send email through the domains of their ISPs or email providers.  In this view one would obtain a personal CA certificate, which would then issue one certificate for each use: web access, email, and so on.
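
As a small illustration, using the same hypothetical Certificate shape as the earlier sketch (the names and domains here are invented), the resulting chain would have four levels: the root CA, the provider's domain CA, the personal CA, and the per-service certificates.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Certificate:
        subject: str
        issuer: Optional["Certificate"]

    root     = Certificate("Example Root CA", None)                      # external notary
    provider = Certificate("mailprovider.example", root)                 # provider's domain CA
    personal = Certificate("alice.mailprovider.example", provider)       # the individual's personal CA
    email    = Certificate("smtp.alice.mailprovider.example", personal)  # per-service certificates
    web      = Certificate("www.alice.mailprovider.example", personal)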

Extensions to X.509 Required


The fundamental extension required would be a way of carrying all relevant internal CA certificates serially in a single format, up to the root CA, which could be independently verified.  There may be some others as well.  A reasonable overlap period may need to be specified for certificate authorities transitioning to a new certificate.  Determining reasonable policies for such transition periods is beyond the scope of this proposal, but a temporary change followed by a reversion to the old certificate would definitely be suspicious.

Additionally, one would need a Key Version List, in which keys could be listed in sequence over a period of time.  This may need to be added to the certificate structure as an extension.
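
A sketch of what such a Key Version List might look like, and of one check it enables, follows; the structure and field names are hypothetical rather than actual ASN.1.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import List

    @dataclass
    class KeyVersion:
        public_key: bytes
        valid_from: datetime
        valid_to: datetime

    def suspicious_revert(history: List[KeyVersion]) -> bool:
        """Flag a timeline in which a key is replaced and then later
        reappears: a temporary change followed by reversion to the old
        key is the pattern a man in the middle would leave behind."""
        seen = []
        for version in history:
            if seen and version.public_key in seen and seen[-1] != version.public_key:
                return True              # an older key came back after being replaced
            seen.append(version.public_key)
        return False

The overlap-period policy mentioned above would then amount to constraining how much the validity windows of consecutive entries may overlap.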

Limitations


Security here is provable over time only given the following assumptions:

1.  The private key has not been compromised on either end.
2.  Continuity in changes regarding private keys is known and can be shown.

The reliance on unconditional trust in root certificate authorities is reduced, though some reliance remains necessary, and the effort needed to mount an attack would be higher.  The above limitations mean, however, that false positives for security concerns may occur when keys are lost and certificates are re-issued, and false negatives when private keys are compromised.

In the event that authorities (in jurisdictions which allow this) subpoena a private key, they can eavesdrop on connections.  Similarly, spear phishing could be used by organized crime to obtain keys.  These attacks are therefore outside the threat model this proposal protects against.

However, the limitations are narrower, and the approach both reduces the risk that certificate authorities face and enables people to better protect their communications.
