DecoPerson 10 hours ago [-]
User asks a (human) assistant to log in to their online banking and make a transfer. No problem. No digital security system can stop this (bar requiring true biometrics on every sign-in, which isn’t happening soon).

User asks Company (with human staff) to log in and do the same thing. Perhaps the company is an accounting firm, a legal firm, or a “manage my company for me” kind of firm. No problem.

User asks Company which makes self-hosted business management tools to log in to their online banking. Oh shit!!! This is a violation of the ToS! The Company that makes this tool is violating the bank’s rights! The user doesn’t understand how they’re letting themselves get hacked!! Block block block! (Also, some banks realise they can charge a fee for such access!)

Everyone on HN sees how that last case — the most useful given how great automation is these days — should be permitted.

I wish the governing layers of society could also see how useful such automation is.

These Device-Bound Session Credentials could result in the death of many good automation solutions.

The last hope is TPM emulation, but I’m sure that TPM attestation will become a part of this spec, and attestation prevents useful emulation. In this future, Microsoft and others will be able to charge the banks a great deal of money to help “protect their customers” via TPM attestation licensing fees, involving rotation, distribution, and verification of keys.

I’m guessing the protocol will somehow prevent one TPM being used for too many different user accounts with one entity (bank), preventing cloud-TPM-as-a-service from being a solution to this. If you have 5,000 users that want to let your app connect to their Bobby's Bank online banking, then you’ll need 5,000 different TPMs. Also, Microsoft (or whoever) could detect and blacklist “shared” TPMs to kill TPMaaS entirely.

Robotic Process Automation on the user’s desktop, perhaps in a hidden Puppeteer browser, could still work. But that’s obviously a great deal harder to implement than just “install this Chrome extension and press this button to give me your cookies.”

Goodbye web freedom, and my software product :(

ascorbic 7 hours ago [-]
There's nothing in this spec that says there needs to be a restriction of one session per TPM. There isn't even anything that forces the client to use a TPM. It just requires the client to generate a key pair, and then use that to sign challenge responses. There's no way for the server to know which TPM was used to store that private key, nor whether one was even used.
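To make that concrete, here's a toy sketch of the challenge-signing flow in Python. It uses textbook RSA with tiny fixed primes purely for illustration (a real client would use a proper key, possibly TPM-backed, but the point stands: the server only ever sees the public key and the signatures):

```python
# Toy sketch of DBSC's challenge/response idea. Textbook RSA with tiny
# fixed primes -- purely illustrative, NOT secure.
import hashlib
import secrets

# Hypothetical toy key. A real client would generate a strong key,
# ideally inside a TPM -- but the server cannot tell either way.
p, q = 61, 53                      # toy primes
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (stays on-device)

def sign(challenge: bytes) -> int:
    """Client side: sign the server's challenge with the private key."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(challenge: bytes, sig: int) -> bool:
    """Server side: check the signature using only the public key (n, e)."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(sig, e, n) == h

challenge = secrets.token_bytes(16)  # server issues a fresh challenge
assert verify(challenge, sign(challenge))
```

Note that nothing in `verify` can distinguish a key held in a TPM from one generated in plain software.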
fc417fc802 6 hours ago [-]
> There's no way for the server to know which TPM was used to store that private key, nor whether one was even used.

I was all ready to disagree with you but apparently you're correct. Color me surprised.

> DBSC will also not prevent an attack if the attacker is replacing or injecting into the user agent at the time of session registration as the attacker can bind the session either to keys that are not TPM bound, or to a TPM that the attacker controls permanently.

This is a very pleasant surprise. I've grown accustomed to modern auth protocols (and other tech stacks as well) having DRM functionality baked into them where they can attest the vendor of the device or the software stack being used to perform the auth. It's become bad enough that at this point I just reflexively assume that any new web technology is hostile to user autonomy.

jeroenhd 4 hours ago [-]
> self-hosted business management tools

As long as banks are held accountable or generally blamed for people handing over their savings to foreign scammers, any kind of external access will be considered a threat. Every single time people get scammed by fake apps or fake websites or fake calls, a large section of society goes "the bank should've prevented this!!!".

Here, one particular bank is popular because of their pro-crypto stance, their high interest rates, and their app-only approach. That makes them an extremely easy target for phishing and scamming, and everyone blames the bank for the old men pressing the "yes I want to log in with a QR code" button when a stranger calls them. Of course, banks could stop scams like that, so the calls to maybe delay transferring tens of thousands for human review aren't exactly baseless, but this is how you get the situation where businesses struggle to integrate with banking apps.

There are initiatives such as PSD2, but those are not exactly friendly to the "move fast and break things" companies that you'll find on HN (because moving fast and breaking things is not a good idea when you're talking about managing people's life savings).

The TPM is used here because it's the most secure way to store a keypair like this. But, as the spec says:

> DBSC will not prevent temporary access to the browser session while the attacker is resident on the user’s device. The private key should be stored as safely as modern operating systems allow, preventing exfiltration of the session private key, but the signing capability will likely still be available for any program running as the user on the user’s device.

In other words, if a more secure alternative than TPMs comes into play, browsers should migrate. If no TPM is available, something like a credential service would also suffice.

As for TPM emulation: it already exists. Of course, TPMs also contain a unique, signed certificate from the TPM manufacturer that can be validated, so it's possible for TPM-based protocols to deny emulated TPMs. The Passkey API supports mechanisms like that, which makes Passkeys a nice way to validate that someone is a human during signup, though the API docs tell you not to do that.

stop50 10 hours ago [-]
How dull life must be without FinTS and PSD2
tigroferoce 7 hours ago [-]
EU has many issues, but sometimes it pulls nice tricks.
randall 10 hours ago [-]
Honestly what's going to happen is you're going to have an ai do this via the phone.

Anything that can be done via phone will be done via ai talking to ai.

franga2000 10 hours ago [-]
We're once again one step closer to losing whatever little autonomy we have left when interacting with online services. Why the hell did we have to put TPMs in every computer?? They bring essentially no benefit for the vast majority of users, but companies keep finding new ways to use TPM capabilities to the user's detriment.
danpalmer 10 hours ago [-]
Don't you think if banks and email providers supported this, it would be a significant security benefit to most users?

I don't think this will be a worthwhile security benefit for most sites, and comes with trade-offs, but we already have trade-offs for higher security around sensitive things like banking and email where most users need a lot of protection.

no_time 7 hours ago [-]
>higher security around sensitive things like banking and email

There are no guard rails built in to make sure this isn't used by everyone and their dog just because it makes site automation a bit more difficult. Also kiss goodbye to browsing the internet without a government/bigcorp™ approved TPM.

fc417fc802 5 hours ago [-]
If you check the spec [0] you will see that unlike most new web tech this one doesn't provide any DRM-adjacent functionality. There's absolutely no technical measure in the spec that could be leveraged to force an implementation to use a TPM. If user choice is disrespected that is squarely on the implementation (ie the browser) and has nothing to do with either the protocol or the server.

Honestly this fairly simple scheme looks a lot like what I wish webauthn could have been.

[0] https://w3c.github.io/webappsec-dbsc/

no_time 3 hours ago [-]
hah you are right. They address this concern specifically in 2.1:

>DBSC is not designed to give hosts any sort of guarantee about the specific device a session is registered to, or the state of this device.

Never mind then. Also makes it more or less useless as a security measure, but at least not outright harmful like the famous WEI proposal.

fc417fc802 3 hours ago [-]
> makes it more or less useless as a security measure

Define "security". This is incredibly useful for mitigating bearer token exfiltration which is the stated purpose. It's also the same way ssh keypairs work and those are clearly much more secure than passwords.

It's only "insecure" from the perspective of a service host who wants to exert control over end users.

Even webauthn leaves attestation as an optional thing. Even in the case that the service operator requires it, so long as they don't engage in vendor whitelisting you can create a snakeoil authority on the fly.

The main advantage this has over webauthn is that it is so much simpler.

cyberax 6 hours ago [-]
TPM does not support the proof-of-presence functionality (except for the key import in BIOS), so there's nothing stopping you from automating it.

You might need to build a custom version of Chrome that supports bypassing the user interaction requirements.

simiones 6 hours ago [-]
What is the scenario that this is supposed to help with? Allowing you to browse malicious sites slightly more securely? Slightly less chance that running malware will steal your money?
imtringued 5 hours ago [-]
The point of this is to prevent a readout of credentials from a machine so that the credentials cannot be used on another machine.
archerx 7 hours ago [-]
No TPMs are not going to stop your grandma from getting scammed over the phone.
londons_explore 9 hours ago [-]
I don't understand the benefit of all this complexity vs simply having the device store the cookie jar securely (with help from the TPM or secure enclave if required).

That would have the benefit that every web service automatically gets added security.

One implementation might be:

* Have a secure enclave/trustzone worker store the cookie jar. The OS and browser would never see cookies.

* When the browser wants to make an HTTPS request containing a cookie, the browser send "GET / HTTP/1.0 Cookie: <placeholder>" to the secure enclave.

* The secure enclave replaces the placeholder with the cookie, and encrypts the https traffic, and sends it back to the OS to be sent over the network.
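A minimal sketch of that placeholder substitution (all names hypothetical, with the enclave's TLS encryption stubbed out):

```python
# Sketch of the placeholder idea above: the browser composes the request
# with a placeholder, and only the trusted component ever sees the real
# cookie value. Names are hypothetical; encrypt() stands in for the
# enclave-side TLS encryption.
PLACEHOLDER = "<cookie-placeholder>"

def encrypt(plaintext: str) -> bytes:
    return plaintext.encode()[::-1]  # toy stand-in, NOT real encryption

class Enclave:
    def __init__(self) -> None:
        # Cookie jar lives here; the browser and OS never read it directly.
        self._jar = {"example.com": "session=SECRET123"}

    def seal_request(self, host: str, raw_request: str) -> bytes:
        filled = raw_request.replace(PLACEHOLDER, self._jar[host])
        return encrypt(filled)  # ciphertext handed back to the OS to send

enclave = Enclave()
req = f"GET / HTTP/1.0\r\nCookie: {PLACEHOLDER}\r\n\r\n"
sealed = enclave.seal_request("example.com", req)
assert "SECRET123" not in req  # the browser-side request never held the cookie
```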

thayne 9 hours ago [-]
The cookie jar isn't the only place the cookie could be leaked from. For example, it could be leaked from:

* Someone inspecting the page with developer tools

* Logs that accidentally (or intentionally) contain the cookie

* A corporate (or government) firewall that intercepts plaintext traffic

* Someone with temporary physical access to the machine that can use the TPM or secure enclave to decrypt the cookie jar.

* A mistake in the cookie configuration and/or DNS leads to the cookie getting sent to the wrong server.

This would protect against those scenarios.

dgoldstein0 6 hours ago [-]
That last one should largely be solved by

1) TLS

2) make your cookie __Secure- or __Host- prefixed, which then requires the Secure attribute.

If DNS is wrong, it should then point to a server without the proper TLS cert and your cookie wouldn't get sent.

peanut-walrus 6 hours ago [-]
Oops your developer accidentally enabled logging for headers. Now everyone with access to your logs can take over your customer accounts.
londons_explore 5 hours ago [-]
You could have similar secure handling of cookies on your server.

For example, the server could verify the cookie and replace it with some marker like 'verified cookie of user ID=123', and then the whole application software doesn't have access to the actual cookie contents.

This replacement could be at any level - maybe in the web server, maybe in a trusted frontend loadbalancer (who holds the tls keys), etc.
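Something like this (names made up), where only the trusted frontend holds the session table and the application only ever sees an opaque marker:

```python
# Sketch of the verified-marker idea: a trusted frontend validates the
# cookie and strips it, so application code (and its logs) only ever see
# an opaque "verified user" marker. All names are hypothetical.
SESSIONS = {"c0ffee": 123}  # cookie value -> user id, frontend-only state

def frontend(headers: dict) -> dict:
    cookie = headers.pop("Cookie", None)  # raw credential removed here
    user_id = SESSIONS.get(cookie)
    if user_id is None:
        raise PermissionError("invalid session")
    headers["X-Verified-User"] = f"user-id={user_id}"
    return headers

app_headers = frontend({"Cookie": "c0ffee", "Accept": "text/html"})
assert "Cookie" not in app_headers  # the app never sees the real cookie
```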

eddythompson80 4 hours ago [-]
Yeah, not really sure that’s simpler or even addresses the same attack vector Google’s option does.

First of all, this approach has the problem that it needs new TPMs capable of doing that, and even if people could update them, we’d have to wait for everybody to replace their TPMs. So let’s wait another 10 to 15 years before we’re really sure.

Second, the attack vector google’s approach is trying to protect against is assuming someone stole your cookies. Might as well assume that someone has gained root on your machine. Can you protect against that? Google’s approach does regardless of how “owned” your machine is, yours doesn’t.

It’s not like you’re gonna hand off the TLS stream to the TPM to write a bit into it, then hand it back to the OS to continue. The TPM can’t write to a Linux TCP socket. Whatever value the TPM returns can be captured and replayed indefinitely, or for the max length of the session.

So you’re back where you started and you need to have a “keep alive” mechanism with the server about these sessions.

Google’s approach is simpler: a private key you refresh your ownership of every X minutes. Even if I’m root on your machine, whatever I steal from it has a short expiration time. It cuts out the unnecessary step of having the TPM hold the cookie too. Plus it doesn’t introduce any limitations on the cookie size.

fc417fc802 6 hours ago [-]
You're still sending the key over the wire as your credential. That's bad design plain and simple. If you want symmetric crypto there's preshared keys where the keys never go over the wire. If you need more than a single point-to-point between two parties then there's asymmetric cryptography.

Ironically the design you propose, juggling headers over to a secure enclave and having the secure enclave form the TLS tunnel, is significantly more complex than just using an asymmetric keypair in a portable manner. That's been standard practice for SSH for I don't even know how long now - at least 2 decades.

Oh also there's a glaring issue with your proposed implementation. The attacker simply initiates the request using their own certificate, intercepts the "secure" encrypted result, and decrypts that. You could attempt mitigations by (for example) having the secure enclave resolve DNS but at that point you're basically implementing an entire shadow networking stack on the secure enclave and the exercise is starting to look fairly ridiculous.

Thorrez 7 hours ago [-]
So every TLS connection, both the handshake and all the subsequent bytes need to be routed through the TPM? That sounds like it'll be slow.

Additionally, the TPM will now need to have a root store of root CAs. Will the TPM manufacturer update the root store? Users won't be able to install a custom root CA. That's going to be a problem, because custom root CAs are needed for a variety of different purposes.

When a user gets an HTTPS certificate error, now it'll be impossible for the user to bypass it.

fc417fc802 5 hours ago [-]
> When a user gets an HTTPS certificate error, now it'll be impossible for the user to bypass it.

According to BigTech that's a feature, not a bug.

nicce 9 hours ago [-]
Is there a latency problem with current HSM implementations? That sounds like a lot of computation for arbitrary data which is now mostly done by CPU.
rezonant 8 hours ago [-]
I'm not sure that's simpler than what Google is proposing here
hamburglar 7 hours ago [-]
Using a public key mechanism means you can have a system where there is literally no interface that allows you to extract the sensitive parts. A very secure cookie jar still requires you to take the secrets out of it to use them.
imtringued 5 hours ago [-]
I don't know why you threw the baby out with the bathwater. The problem is that you want the cookies to be short-lived and device-bound, because someone might intercept your session cookie (e.g. JSESSIONID), or, if they can't read it, they might inject their own JSESSIONID through cross-origin requests somehow.

Binding a session cookie to a device is pretty simple though. You just send a nonce header plus the cookie signed with the nonce using a private key. What the Chrome team is getting wrong here is that there is no need for these silly short-lived cookies that need to be refreshed periodically.

rkagerer 8 hours ago [-]
> Leverages TPM-backed secure storage when available

Step 2: TPM required, and your cookies are no longer yours.

I actually like the idea as long as you hold the keys. Unfortunately, the chasm to cross is so small that I can't see this ending in a way beneficial for users.

djrj477dhsnv 11 hours ago [-]
Hell no. If I can't make a full (encrypted) backup of my entire device and restore it on different hardware, I don't want it.
danpalmer 10 hours ago [-]
What's the use-case for restoring session authentication state with an external service as part of that? You have the creds, and the session will expire in somewhere between 10 mins and maybe a week from the backup (for sites that need this security). I doubt you'll be restoring within the session timeout of most online banking.

I get the benefits of restoring a full backup, but in this instance it would seem to lose practical security benefits for theoretical purity.

pabs3 9 hours ago [-]
If I remove my drive from a dead computer and put it in a spare one, it should boot up in the same state, including cookies in the browser. With a desktop computer and SSDs that could easily happen within the banking timeout. With Linux it is trivial to do as well.
ascorbic 7 hours ago [-]
Wait, so your use case here is that you log in to online banking, and while you are paying your bills or whatever your computer dies, you pull the drive from the computer that just died, put it into the new computer, boot it back up all within 10 minutes, and then expect to still be logged in? That seems exceptionally unusual, and logging into one account seems a small inconvenience compared to replacing your entire computer. Tbh I'd be amazed if it even works now. Does Linux restore the complete memory state of a dead computer when you install the drive in a new machine?
fc417fc802 5 hours ago [-]
> Does Linux restore the complete memory state of a dead computer when you install the drive in a new machine?

Cookies are generally persisted to disk in one of your browser's many caches.

danpalmer 8 hours ago [-]
Personally I have more use for protection against session theft than I do for moving a drive to another computer and continuing to use the same online banking session within 10 minutes. I suspect most people are in the same category.
imtringued 5 hours ago [-]
I have to log into my work accounts every single day. Having to login again on a new computer hardly sounds like a burden.
fc417fc802 5 hours ago [-]
The entire point of this scheme is that sessions would no longer need to be expired so aggressively. The bearer tokens remain short lived but the asymmetric key model means leaking the underlying session credential is much more difficult.
DCKing 7 hours ago [-]
The opsec reason I use Safari as a work browser today is that Safari has a much more blunt tool to disrupt cookie stealers: Safari and macOS do not permit (silent) access to Safari's local storage to user level processes. If malware attempts to access Safari, its access is either denied or the user gets presented a popup to grant access.

I wish other browsers implemented this kind of self protection, but I suppose that is difficult to do for third party browsers. This seems like a great improvement as well, but it seems this is quite overengineered to work around security limitations of desktop operating systems.

ezst 6 hours ago [-]
Seems like a very weak mitigation, if this is to protect against malware running in your user session, alongside your browser. Can't it already do all kinds of nefarious keylogging/screen recording/network tracing/config file editing enabling impersonation, and so on?

I mean, if my threat model starts with "I have a mal/spyware running alongside my browser with access to all my local files", I would pretty much call it game over.

DCKing 5 hours ago [-]
> I mean, if my threat model starts with "I have a mal/spyware running alongside my browser with access to all my local files", I would pretty much call it game over.

This is a big problem I have with desktop security - people just give up when faced with something so trivial as user privileged malware. I consider it a huge flaw in desktop security that user privilege malware can get away with so many things.

macOS is really the only desktop OS that doesn't just give up when faced with same user privileged malware (in good and bad ways). So there it's likely a good mitigation - macOS also doesn't permit same user privileged processes to silently key log, screen record, network trace and various other things that are possible on Windows and common Linux configurations.

ezst 2 hours ago [-]
Yeah, I'm siding with the sceptics on this one. Adding more layers of indirection against malware running under a user session seems like a good idea in general, but in practice you showed how ineffective the macOS approach is: under this model, every application is left to defend itself in an ad hoc, application-specific manner. That doesn't generalise well: you can't expect every software, tool, or widget vendor to be held to the same level of security as Apple.

Another approach is to police everything behind rules (the way selinux or others do), which is even better in theory. In practice, you waste a ton of time bending those policies to your specific needs. A typical user won't take that.

Then there is the flatpak+portal isolation model, which is probably the most pragmatic, but not without its own compromises and limitations.

The attitude of trusting by default, and chrooting/jailing in case of doubt, probably still has decades to live.

fourfour3 5 hours ago [-]
On macOS, basically all of these are extra permissions that you have to grant to an application - you'll get prompted with a popup when they try to do it.

eg: local network access, access to the documents and desktop folder, screen recording, microphone access, accessibility access (for keylogging), full disk access, all require you to grant permission

londons_explore 9 hours ago [-]
Are there really many web services where an attacker having long-lived access gives them much more power than short lived access?

If someone gets short lived access to a control panel for something, there are normally ways to twiddle settings to, for example, create more user accounts, or slacken permissions.

If someone gets short lived access to a datastore, they can download all the data.

etc.

fc417fc802 5 hours ago [-]
Not more power in the sense of greater access but nonetheless gaining persistence is a huge advantage for an attacker.

In the case of bearer tokens, there are many cases where attackers have managed to steal them without achieving full device compromise. Since a bearer token literally sends the key in plaintext (horribly insecure), all it takes is tricking the client software into sending the header to the wrong place a single time.

thayne 9 hours ago [-]
One way you could potentially combat that is to make it so that a single short lived token isn't enough to accomplish more dangerous tasks like that.

Many sites already have some protections against that by for example requiring you to enter your password and/or 2fa code to disable 2fa, change privacy settings, update an email address, etc.

modeless 9 hours ago [-]
> Even if session cookies are stolen, they cannot be used from another device.

This seems false? Given the description in the article, the short lived cookie could be used from another device during its lifetime. Having this short lived cookie and having the browser proactively refresh it seems like a bad design to me. The proof of possession should be a handshake at the start of each connection. With HTTP3 you shouldn't need a lot of connections.

thayne 9 hours ago [-]
Right. The idea is that the short-lived cookies would expire very quickly, so even if you get access to one, it isn't very useful.

> The proof of possession should happen at the start of each connection. With HTTP3 you shouldn't need a lot of connections.

That could possibly be workable in some situations, but it would add a lot of complexity to application layer load balancers, or reverse proxies, since they would somehow need to communicate that proof of possession to the backend for every request. And it makes http/3 or http/2 a requirement.

fc417fc802 4 hours ago [-]
I think imitating TLS (and who knows how many other protocols) by coupling the asymmetric key with a symmetric one instead of a bearer token is the obvious upgrade security wise. That way you could prove possession of the PSK with every request, keep it short lived, and (unlike bearer tokens) keep it hidden from callers of the API.

That said, the DBSC scheme has the rather large advantage that it can be bolted on to the current bearer token scheme with minimal changes and should largely mitigate the current issues.
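As a sketch of what that symmetric variant could look like (header names and message format are made up, not from any spec): establish a short-lived PSK once, then MAC every request with it, so no secret ever crosses the wire again:

```python
# Sketch of per-request proof of possession with a short-lived PSK.
# Illustrative only; the message format here is invented.
import hashlib
import hmac
import secrets

psk = secrets.token_bytes(32)  # agreed once at session start, never resent

def tag_request(method: str, path: str, nonce: bytes) -> str:
    """Client side: MAC the request details with the session PSK."""
    msg = f"{method} {path} ".encode() + nonce
    return hmac.new(psk, msg, hashlib.sha256).hexdigest()

# Server side: recompute the tag with its copy of the PSK and compare.
nonce = secrets.token_bytes(16)  # fresh per request, for replay protection
tag = tag_request("GET", "/account", nonce)
expected = hmac.new(psk, b"GET /account " + nonce, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)
```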

fc417fc802 5 hours ago [-]
I'm curious why the solution here is bearer tokens bound to asymmetric keys instead of a preshared key model. Both solutions require a new browser API. In either case the key is never revealed to the caller and can potentially be bound to the device via a hardware module if the user so chooses.

Asymmetric crypto is more complex and resource intensive but is useful when you have concerns about the remote endpoint impersonating you. However that's presumably not a concern when the authentication is unique to the ( server, client ) pair as it appears to be in this case. This doesn't appear to be an identity scheme hence my question.

(This is not criticism BTW. I am always happy to see the horribly insecure bearer token model being replaced by pretty much anything else.)

userbinator 11 hours ago [-]
Are they slowly trying to sneak in WEI again?
matt123456789 11 hours ago [-]
I'm sure that the business case for it hasn't gone away, but unless they can side-channel some information out of the TPM, this proposal doesn't appear to give the server the ability to uniquely identify a visitor except through the obvious and intended method. So: maybe, but this appears to be separate.
nicce 10 hours ago [-]
I wonder what this means

> Servers cannot correlate different sessions on the same device unless explicitly allowed by the user.

I read it as: the browser can always correlate the public/private key with the website (it knows if there is an authenticated tab/window somewhere).

Why are they making this possible, when you could store the information in a random UUID and just connect it to the cookie? What is the use case where you want to connect a new session instead of using the old one?

fc417fc802 4 hours ago [-]
It means that it works the same way that first party cookies already work. HN can't see my Google cookies and vice versa. If I clear my cookies Google has no way to know (aside from fingerprinting and maybe IP) that I'm the same person.

> What is the use case where you want to connect new session instead of using the old one?

Multiple accounts? Clear cookies and visit the next day? Probably other stuff as well. The important point is that DBSC doesn't itself increase the ability of website operators to track you beyond what they can already do.

djrj477dhsnv 10 hours ago [-]
It promotes the idea of needing a TPM to browse the modern web. Once people are used to that, it makes WEI an easier sell.
thayne 8 hours ago [-]
It doesn't require a TPM though. It just says it CAN use one, if one is available. If it is changed to require a TPM though, then that will be a problem.
sylos 10 hours ago [-]
What is WEI?
remus 7 hours ago [-]
I wonder how long these short-lived cookies actually live? From the article it sounds like Chrome makes a request to the server every time it has to generate a new short-lived cookie, so if they do have very short lives (say a few minutes), Chrome could be making a lot of requests to your server to generate new cookies.

Edit: reading a bit more closely, it sounds like the request is more of a notification and actually all the real work happens in the user's browser, so you could presumably ignore it and hope the generated bandwidth to your server is pretty low.

mmastrac 10 hours ago [-]
mTLS but not mTLS. These Google standards are always so half-baked.
mmis1000 9 hours ago [-]
Shouldn't webauthn be able to do this already? Why a separate proposal to do this again?
agl 9 hours ago [-]
WebAuthn protects the sign in, but malware can still steal the resulting cookies. DBSC protects the sign in _session_. (It should stand for Don’t Bother Stealing Cookies.)
mmis1000 9 hours ago [-]
If you read the proposal carefully, this API is used to refresh/revalidate an extremely short-lived cookie, not to replace the cookie itself. Which you can already do with webauthn.
nicce 9 hours ago [-]
Maybe there is an assumption that this is easier to push through for masses because the UX is better. (no phone, no physical key required)
ximm 6 hours ago [-]
Webauthn always requires a user presence check though.
mmis1000 5 hours ago [-]
Seems the whole proposal exists solely because they are unwilling to add a "silent" option to webauthn. I am confused about the decision though.

https://github.com/w3c/webauthn/issues/199#issuecomment-2669...

fc417fc802 4 hours ago [-]
Webauthn is significantly more complicated and conceptually structured around the use of authenticators. DBSC is a rather simple challenge-response scheme that can be bolted on to things that already exist in order to mitigate bearer token exfiltration. Even though they both use public keys the two things solve (slightly) different problems.

Importantly, the presence of attestation in webauthn could potentially compromise privacy or user choice in certain cases. DBSC has zero support for that.

You could certainly use a webauthn credential to establish a DBSC session though.

thayne 8 hours ago [-]
It seems like this requires you to have very high availability for the refresh endpoint. If that endpoint is unavailable, the user can end up effectively logged out, which could lead to a confusing and frustrating experience for the user.