My advice to little sites: Basic Auth stops bots dead in their tracks.
The credentials have to be discoverable of course, which makes it useless for most sites, but it's not that inconvenient. Once entered, browsers offer to use the same creds on subsequent visits.
I have several apps that use 3D assets that are large enough for it to become worrisome when hundreds of bots are requesting them day and night. Not any more, lol.
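For anyone curious, the whole gate is just one header check. A minimal sketch with made-up credentials, using Python's standard library (in practice you'd simply enable Basic Auth in nginx or Apache):

    # Minimal sketch (not the commenter's actual setup): HTTP Basic Auth in front
    # of static assets, standard library only. Credentials and port are made up.
    import base64
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    EXPECTED = "Basic " + base64.b64encode(b"demo:letmein").decode()  # hypothetical creds

    class BasicAuthHandler(SimpleHTTPRequestHandler):
        def do_GET(self):
            # Browsers cache the credentials after the first prompt, so regular
            # visitors see the dialog once; bots without the creds just get 401.
            if self.headers.get("Authorization") != EXPECTED:
                self.send_response(401)
                self.send_header("WWW-Authenticate", 'Basic realm="assets"')
                self.end_headers()
                return
            super().do_GET()  # serve the requested file (e.g. a large 3D asset)

    if __name__ == "__main__":
        HTTPServer(("", 8000), BasicAuthHandler).serve_forever()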
krizhanovsky 12 hours ago
That's good advice, thank you.
In our approach we do our best not to affect the user experience. E.g. consider a company website with a blog: the company does its best to attract a wider audience to its blog, products and so on. I guess quite a large part of that audience would be lost if a website they're visiting for the first time required authentication.
However, for returning, and especially regular, clients I think that is a really simple and good solution.
gpi 19 hours ago
Where can I learn more about JA5? John Althouse (what "JA" stands for) has not published anything about JA5 yet.
The heuristics you use are interesting, but this will likely only be a hindrance to lazy bot creators. TLS fingerprints can be spoofed relatively easily, and most bots rotate their IPs and signals to avoid detection. With ML tools becoming more accessible, it's only a matter of time until bots are able to mimic human traffic well enough, both on the protocol and application level. They probably exist already, even if the cost is prohibitively high for most attackers, but that will go down.
Theoretically, deploying ML-based defenses is the only viable path forward, but even that will become infeasible. As the amount of internet traffic generated by bots surpasses the current ~50%, you can't realistically block half the internet.
So, ultimately, I think allow lists are the only option if we want to have a usable internet for humans. We need a secure and user-friendly way to identify trusted clients, which, unfortunately, is ripe to be exploited by companies and governments. All proposed device attestation and identity services I've seen make me uneasy. This needs to be a standard built into the internet, based on modern open cryptography, and not controlled by a single company or government.
I suppose it already exists with TLS client authentication, but that is highly impractical to deploy. Is there an ACME protocol for clients? ... Huh, Let's Encrypt did support issuing client certs, but they dropped it[1].
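Mechanically, requiring client certificates isn't the hard part; getting certificates onto clients is. A minimal server-side sketch of TLS client authentication (mutual TLS) with Python's ssl module, with hypothetical file names:

    # Minimal sketch of mutual TLS on the server side; "server.pem" and
    # "client-ca.pem" are hypothetical file names.
    import socket, ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.pem")            # server cert + key in one file
    ctx.load_verify_locations("client-ca.pem")   # CA that signed trusted client certs
    ctx.verify_mode = ssl.CERT_REQUIRED          # reject clients without a valid cert

    with socket.create_server(("", 8443)) as sock:
        with ctx.wrap_socket(sock, server_side=True) as tls:
            conn, addr = tls.accept()            # handshake fails for cert-less clients
            print("authenticated client:", conn.getpeercert().get("subject"))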
This particular project, WebShield, is simple and it didn't take too long to develop. Basically, with this project we're trying to figure out what can be built once fingerprints and traffic characteristics are stored in an analytic database. It seems easy to make PoCs with these features.
For now, if this tool can stop some dumb bots, we'll be happy. We definitely need more development and more sophisticated algorithms to fight some paid scraping proxies.
It's more or less simple to classify DDoS bots because they have a clear impact: system performance degrades. For some bots we can also introduce a target, both for the bots and for the protection system, e.g. booked slots for visa appointments. For some scrapers this is harder.
Another opportunity is to dynamically generate classification features and verify the resulting models, build web page transition graphs, and so on.
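For example, a per-client page transition graph needs nothing more than edge counters over consecutive requests (a rough sketch; identifiers are made up):

    # Minimal sketch of a per-client page-transition graph: count edges between
    # consecutively requested pages; the counts can later feed feature generation.
    from collections import Counter, defaultdict

    transitions = defaultdict(Counter)   # client_id -> Counter of (prev_page, page)
    last_page = {}                       # client_id -> last requested page

    def record(client_id: str, page: str) -> None:
        prev = last_page.get(client_id)
        if prev is not None:
            transitions[client_id][(prev, page)] += 1
        last_page[client_id] = page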
That's a good point about possibly blocking ~50% of the Internet. For DDoS we _mitigate_ an attack rather than _block_ it, so for bots we should probably do the same - just rate limit them instead of blocking them entirely.
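E.g. a token bucket keyed by a client fingerprint (such as a JA5-style hash) slows a suspected bot down instead of banning it outright - a minimal sketch with made-up limits:

    # Minimal sketch of "rate limit instead of block": a token bucket keyed by a
    # client fingerprint. Rates and bucket size are made up.
    import time
    from collections import defaultdict

    RATE = 5.0      # tokens (requests) refilled per second
    BURST = 20.0    # bucket capacity

    buckets = defaultdict(lambda: {"tokens": BURST, "ts": time.monotonic()})

    def allow(fingerprint: str) -> bool:
        b = buckets[fingerprint]
        now = time.monotonic()
        b["tokens"] = min(BURST, b["tokens"] + (now - b["ts"]) * RATE)
        b["ts"] = now
        if b["tokens"] >= 1.0:
            b["tokens"] -= 1.0
            return True     # serve the request, possibly at lower priority
        return False        # over the limit: delay or drop, but don't hard-ban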
Technically, we can implement verification of client-side certificates, but, yes, the main problem is adoption on the client side.
thank you for the reply!
You can read about JA5 at https://tempesta-tech.com/knowledge-base/Traffic-Filtering-b... .
But the thing is that the hashes were just inspired by the work of John Althouse; there is no other relation.
Unfortunately, we didn't realize what "JA" stands for at the time we were designing the feature. We will rename it: https://github.com/tempesta-tech/tempesta/issues/2533
Sorry for the confusion.
[1]: https://news.ycombinator.com/item?id=44018400