Codesealer FAQ – All you need to know

Codesealer Products

This section contains frequently asked questions and answers about Codesealer Products.

Fundamentally, Codesealer is an extremely strong and easily deployable security product that eliminates threats from malicious man-in-the-middle and man-in-the-browser attacks against your web-based services.

Codesealer is patented and Gartner-recognized. We have protected more than 55 billion sessions and have never been compromised.

Our technology is based on advanced cryptography, cutting-edge dynamic obfuscation, and a unique method for content delivery. Together they provide the ultimate user-interface protection. This never-compromised technology is used in all our products.

Read more here (Core) and here (Cover), where you can also access a live demo site, download a test version of Core, or find technical information.

Codesealer Core

Codesealer Core takes a piece of JavaScript and delivers it using our proprietary mechanism.

How is the target script delivered?

Codesealer Core establishes a secure communication channel, transfers the script, and, if all checks pass, executes it.

How does Codesealer Core protect itself?

Codesealer Core has several levels of protection:

  1. Advanced tamper detection, leveraging browser details that cannot be simulated by an attacker, to verify the environment it runs in.
  2. Powerful dynamic obfuscation, making every instance of Codesealer Core unique. The techniques involved render automatic deobfuscation infeasible.
  3. Dynamically generated protocol and logic details, meaning that an adversary cannot just manually deobfuscate a single instance, as the knowledge gained does not apply to the next instance.
  4. Very short instance lifetimes, meaning that even if every mechanism can be circumvented, this must be done manually and uniquely for each attack attempt, and at such speed that it becomes entirely infeasible.

Tamper detection? Can’t the attacker just change the `if (tamperedWith)`?

Our tamper detection is augmented by our dynamic obfuscation: rather than acting as simple conditions an attacker could flip, the tamper checks play a part in our crypto protocol, which is itself generated by the dynamic obfuscator.

An adversary cannot find the right answer to these calculations, as it is unknown even to Codesealer Core itself, and only a single try is afforded before the instance is invalidated. The odds of guessing right for just one of the many tamper detection points, in the single try available, are 1:340282366920938463463374607431768211456. Good luck?
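That denominator is exactly 2^128, the number of possible 128-bit values, which can be checked directly:

```javascript
// The quoted odds are 1 in 2^128, i.e. guessing a 128-bit value in one try.
const keyspace = 2n ** 128n;
console.log(keyspace.toString()); // → 340282366920938463463374607431768211456
```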

Obfuscation? Isn’t that security by obscurity?

All security can be rated by the time it takes to bypass: A door lock may take seconds, and a vault may take hours. True cryptography, if handled correctly, takes just a few billion years more than most are willing to wait to break.

Obfuscation will be broken in weeks, or perhaps just days by a particularly skilled individual. We make each instance live for mere minutes, so breaking it is all for naught: the adversary may have found our key, but the locks are changed before they ever get near them.

What is the overhead of Codesealer Core?

Codesealer Core increases the startup time of an application slightly, primarily due to the additional cryptography. However, it does not have any performance impact on a running application.

Does Codesealer Core protect my running application? / Does Codesealer Core protect HTML/CSS/images/…?

Codesealer Core does one thing, and does it well: Deliver scripts securely.

For complete protection of web applications, see Codesealer Cover.

Does Codesealer Core obfuscate my application?

Codesealer Core obfuscates its own client and transfers your application over an in-JS encrypted tunnel, making the transfer unreadable even in the Network tab of the browser. However, it does not obfuscate your application code itself, in order to ensure full compatibility.

If you wish to add obfuscation on top of Codesealer Core, you can apply an obfuscation step before delivering the script to Core.

How was Codesealer Core created?

Codesealer Core was created from a need within Codesealer Cover. In order for Codesealer Cover to maintain its security guarantees, there must be a guarantee that it is delivered intact.

After having an internal solution for some time, we found that customers were interested in using this technology themselves, and so Codesealer Core was born as a standalone product that Codesealer Cover consumes.

Codesealer Cover

Codesealer Cover is a complete web application protection solution, protecting from start to finish with DOM protection, network encryption, and more.

Codesealer Cover comprises a proxy backend server and a secure client that runs in the end-user's web browser.

How does Codesealer Cover protect a web application?

When you use Codesealer Cover, the web application loaded in the end-user's browser is first replaced by the Codesealer Core client. The Core client detects various forms of browser tampering, establishes a secure connection, and, if the environment is safe, initiates the Cover client.

The execution of the final web application is done within the Cover sandbox. The sandbox hides the web application, monitors DOM interaction to detect changes made by external factors, and takes over all network communication from the web application.
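The monitoring idea can be illustrated with a small sketch: interposing on writes to an object and recording each change before it takes effect, analogous in spirit to how a sandbox can observe DOM interaction. The names here are made up for the example; Cover's actual mechanism is not public.

```javascript
// Toy illustration of mutation monitoring via a Proxy. The names are
// hypothetical; this is not Codesealer's implementation.
function monitored(target, report) {
  return new Proxy(target, {
    set(obj, prop, value) {
      // Record a before/after pair for each change, like a forensic entry.
      report({ prop, before: obj[prop], after: value });
      obj[prop] = value;
      return true;
    },
  });
}

// Usage: every write is observed before it takes effect.
const events = [];
const doc = monitored({ title: 'home' }, (e) => events.push(e));
doc.title = 'tampered';
// events[0] is { prop: 'title', before: 'home', after: 'tampered' }
```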

What communication does Cover protect?

Cover protects all HTML, JS, CSS, AJAX and WebSocket communications. Certain media types, such as videos, are served in a regular fashion due to browser limitations.

What is the overhead of Codesealer Cover?

As Codesealer Cover uses Codesealer Core, the performance considerations of Core apply: That is, a startup overhead is present.

As Cover protects a web application actively, there is also an overhead to application execution. Notably, every network transfer incurs some additional encryption and processing work.

What does the Codesealer Cover server do?

The first task of the Codesealer Cover server, if the requested site is configured as protected, is to initiate Codesealer Core, and deliver the Codesealer Core and Cover clients to the end-user.

After this, the primary responsibility of the Cover server is to translate our secure, proprietary protocol back to “regular” HTTP(S) requests to your existing backend infrastructure.

Can an attacker make requests directly to the backend behind Cover?

No, a site protected by Codesealer Cover will only pass requests to the backend if they are made using our secure, proprietary protocol. A “regular” request simply receives a Core and Cover instantiation sequence, which is of no use to an attacker.

This makes it hard for an attacker to probe, experiment with or script API endpoints.

Can an attacker bypass the client and talk directly to Cover?

Codesealer Cover uses the dynamic protocol from Codesealer Core, to ensure that an attacker cannot initiate a session without first reverse-engineering the most recent Core instance.

As these instances are only valid for a few minutes, and an attacker who does not finish within that window must start over from scratch, the reverse engineering required to instantiate a session becomes infeasible.

Does Codesealer Cover protect against SQL injections?

As an attacker cannot probe API endpoints outside Cover, discovering and exploiting SQL injections is made harder. However, we do not specifically detect SQL injections, so if a user-facing form is vulnerable, an attacker may still be able to perform an SQL injection through Cover.

Does Codesealer Cover protect against Cross-Site Request Forgery?

Many forms of CSRF are impossible against Codesealer Cover-protected sites, as one must initiate a full session before a request can go through. An `<img>` tag, for example, cannot be used to make a CSRF GET request to a protected site.

However, we do not explicitly protect against CSRF.

What happens when Codesealer Cover detects DOM modifications?

First, a forensic report of the incident is generated. What happens afterwards is configurable, and can be one of the following:

1) Terminate the session and redirect the end-user to an error page chosen by the customer.

2) Terminate the session with no action, making what happened less obvious for an attacker.

3) Do nothing, attracting minimum attention from attackers while still gathering forensics on the backend.

How are client-side forensics collected on the server?

Forensic information can either be sent in a dedicated message, or “piggy-back” on other required messages such as end-user requests, as configured by the customer for each incident type.

The former gives immediate feedback, whereas the latter obscures the transfer so that the attacker will not discover that forensic information is embedded. Even if the mechanism is known, it cannot be blocked, as doing so would also block a user action.

How do I use forensic reports from Codesealer Cover?

Codesealer Cover stores forensic reports on the server, and can display them in the associated Dashboard, deliver them as bundles, or inform a SIEM solution of the incident.

These bundles will include information about the incident, such as the HTML document before and after the incident occurred for DOM modification attacks.

Codesealer Dynamic Obfuscator

The Codesealer Dynamic Obfuscator provides strong obfuscation of JavaScript, going far beyond what common obfuscators or minifiers are capable of.

What does the Dynamic Obfuscator do?

The Codesealer Dynamic Obfuscator reshuffles code; renames variables and fields while maximizing name reuse to minimize readability; obfuscates constants and strings; inserts dead code and honeypots as distractions; inserts dynamically generated functionality, such as the Core dynamic protocol; and more.
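As a toy illustration of just one of these techniques, constant obfuscation, a string literal can be replaced by an expression that reconstructs the value at runtime. The real obfuscator goes far beyond this sketch:

```javascript
// Toy constant obfuscation: replace a string literal with an expression that
// rebuilds it at runtime. Illustrative only; vastly simpler than the product.
function obfuscateString(s) {
  const mask = 42; // a real tool would vary this per instance
  const codes = [...s].map((c) => c.charCodeAt(0) ^ mask);
  return `String.fromCharCode(${codes.map((c) => `${c} ^ ${mask}`).join(', ')})`;
}

// obfuscateString('secret') yields:
// String.fromCharCode(89 ^ 42, 79 ^ 42, 73 ^ 42, 88 ^ 42, 79 ^ 42, 94 ^ 42)
// which evaluates back to 'secret' at runtime.
```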

Why was the Dynamic Obfuscator created?

The Dynamic Obfuscator was created to protect Codesealer Core and Codesealer Cover.

How does the Dynamic Obfuscator protect Codesealer Core?

First, by inserting a dynamic protocol into every Codesealer Core instance, we ensure that each individual client must be reverse engineered before it can be attacked. Second, by dynamically and heavily obfuscating each instance, we ensure that reverse engineering will be manual and slow. And last, by having instance lifetimes that are very short, we ensure that it is infeasible to reverse engineer a Core instance before it is rendered invalid.

Codesealer Dashboard

The Codesealer Dashboard is a web application that allows for Cover monitoring and forensic report management.

Codesealer Controller

The Codesealer Controller is a central coordinator between multiple Codesealer Cover instances, used to share session details for non-sticky load balancers, centralize forensic data collection, and more.

Cryptography

This section contains short and concise descriptions of various cryptographic constructs, many of which are relevant through their implementation in Codesealer Products. It is intended to aid in understanding and answering technical enquiries, and to build a general understanding of cryptography.

Crypto algorithms used in Codesealer products

Overall transport

TLS (version 1.1 to 1.3), if enabled, is used for all communication on top of in-JS cryptography.

Encryption

Rabbit, which uses 128-bit keys, is used for our in-JS encryption. This choice is made for performance.

Authentication (MAC)

Badger is a MAC based around Rabbit, so they go hand in hand for a performant authenticated encryption solution.

Key exchange

Conventional Diffie-Hellman is used for the key exchange, with reasonably sized values of p.

Pseudo-random number generator (PRNG)

We use Rabbit’s PRNG mode here.

Hash

A process that takes arbitrary content and produces a small but practically unique “fingerprint” of it. Hashes have many uses, including checking content integrity by comparing against a known fingerprint, or comparing pieces of content by comparing their fingerprints.

A hash on its own does not guarantee that the content and its hash were created by someone holding a key. For such protection, see Authentication.

Algorithms: SHA-3, SHA-2, SHA-1, Keccak, MD5, etc.

Symmetric-key algorithm

An algorithm uses symmetric keys if all operations (e.g. encryption and decryption) use the same shared key.

Anyone with the key can create, use or validate content all the same.

Asymmetric-key algorithm (Public-key cryptography)

An algorithm uses asymmetric keys if all operations (e.g. encryption and decryption) use different keys, usually split into private (non-shared) and public (shared) keys.

This is often used in systems where those who need to use or validate content must not also be able to create it. Signing is such a system.

Key Exchange

A process that allows two parties to agree on a new, random secret key in a secure fashion that does not allow anyone else to know the key, even if they eavesdropped on the communication between the parties.

Algorithms: Diffie-Hellman (DH), Elliptic-curve Diffie-Hellman (ECDH), SRP, SPEKE, etc.

Key Derivation Function (KDF)

A process that allows one to create something usable as a symmetric key from something like a user password. A user password is usually far too short, and lacks entropy.

These functions may be intentionally very slow, to make it impractical to brute-force what the password was, given the output key.

Algorithms: PBKDF2, HKDF, Argon2, scrypt, etc.

Nonce / Initialization Vector (IV)

A “nonce” or “initialization vector” is a value, either randomly generated or an incrementing sequence number, that is used together with the secret key for cryptographic operations. It is written into the encrypted data and is thus publicly known.

Without a nonce or initialization vector, using a secret key for encryption twice would be unsafe, as encrypting different content with the same key could reveal information about the key and the content.

Adding something random to the key, even if publicly known, ensures that the combined key is never the same; reusing a secret key is thus not unsafe, as long as the nonce or initialization vector remains unique. Therefore, most encryption algorithms use a nonce or IV.

Pseudo-Random Number Generator (PRNG)

A process that allows one to take one random number (called the “seed”), and extend it to however much random data is required. Commonly used to generate keys for other algorithms.

Vulnerabilities in PRNGs, such as bias or patterns, as well as bad seeds, usually lead to keys that can be predicted entirely or partially by an adversary.

Algorithms: Mersenne Twister, Xorshift, etc.
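A minimal example of the concept, a xorshift32 generator: the same seed always reproduces the same stream, which is exactly why a predictable seed is dangerous for key generation:

```javascript
// Minimal xorshift32 PRNG (not cryptographically secure; illustration only).
function xorshift32(seed) {
  let state = seed >>> 0; // keep the state as an unsigned 32-bit integer
  return function next() {
    state ^= state << 13; state >>>= 0;
    state ^= state >>> 17;
    state ^= state << 5;  state >>>= 0;
    return state;
  };
}

// The same seed reproduces the same "random" stream exactly.
const a = xorshift32(12345);
const b = xorshift32(12345);
console.log(a() === b() && a() === b()); // true
```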

Encryption

A process that makes content illegible to those not holding the key, by applying a secret key to the entire content. Decryption reverses the process.

Note that both encryption and decryption blindly apply a key. If you decrypt something with a wrong key, or something that was not encrypted in the first place, it will succeed and give you gibberish.

User-provided passwords are never used as keys directly. A Key derivation function is used to make user-input usable.

Encryption does not guarantee that content is intact and made by someone holding the key. Encryption is usually combined with Authentication for such protection.

Algorithms: AES, ChaCha20, 3DES, Blowfish, Rabbit, etc.

Authentication (MAC, Message Authentication Code)

A process that allows those holding the key to validate that the content is intact and was made by one of the parties holding this key. Commonly implemented by combining a hash with a key, in which case it is called an HMAC (Hash-based Message Authentication Code).

Authentication does not make content illegible to those not holding the key. Authentication is usually combined with Encryption for such protection.

Authentication does not allow one to know which party holding the key created the content. See Signing for such protection.

Algorithms: HMAC-SHA-256 (an HMAC), Poly1305 and Badger (universal-hashing MACs), etc.

Signing

A process that allows others to validate that content is intact and was made by a specific individual.

Signing is similar to authentication, but uses asymmetric-key cryptography instead of symmetric-key cryptography. This means that you sign with a private key that is not shared, and others validate using a public key that you have given them.

Signing requires the public keys to be exchanged in a trusted manner, similar to how a physical signature has no value if one does not know how it is supposed to look. Public-key infrastructure (PKI) with certificates is one way to do this.

Algorithms: RSA, ECDSA, Ed25519, etc.

Authenticated encryption (AE or AEAD)

A combination of encryption and authentication in a single algorithm or compound algorithm.

Algorithms: AES-GCM, AES-CCM, ChaCha20-Poly1305

TLS (formerly SSL)

Transport Layer Security (formerly Secure Sockets Layer) is a protocol suite that provides encryption, authentication, key exchange, identity validation (through public-key infrastructure), forward secrecy, and more. It supports several algorithms, and negotiates version and supported features. It is the go-to solution for secure communication, and is what browsers use for HTTPS.

SSL was developed at Netscape, with SSL 3.0 released in 1996. TLS 1.0 was released in 1999 as an upgrade over SSL 3.0.

Issues with TLS

One of the primary weaknesses in TLS is the excessive options, many of which are insecure yet popular, as well as backwards compatibility to older, insecure versions.

An example of issues from misconfiguration is the POODLE attack in 2014, where many servers were configured to allow clients (and adversaries) to ask for SSL 3.0, which is horribly broken. Financial institutions are also famous for intentionally configuring TLS for weak security, in order to be able to break the protection internally in the name of “compliance” (which does not actually require it).

TLS implementations are also complicated, and often erroneous. OpenSSL is famous for issues such as Heartbleed, also found in 2014.

Identity validation is based on public keys and certificates, but the certificate authorities trusted by default on a computer include government-owned institutions from many countries, including several from the U.S. and China. While some countermeasures are in place, to trust the “padlock” in a browser for a website one accesses, one must trust that none of the certificate authorities on the machine would authorize an adversary to lie about their identity.

These issues have all-in-all given TLS a somewhat bad rep, but it remains in place as the de facto choice for off-the-shelf network protection.

Security concepts

This section gives explanations of some general security concepts, both in the form of attacks and of protection mechanisms against them. This is meant to help in understanding the market we are in.

Protection mechanisms

Sandboxing

A protection method where an application has very limited access to resources so that damage is contained if it becomes compromised.

The use of the term “sandbox” in Codesealer Cover is a little backwards: rather than containing an untrusted application to protect the environment, the Cover sandbox shields a trusted application from an untrusted environment.

WAF (Web Application Firewall)

A WAF is a type of HTTP proxy that looks for certain predefined attack patterns. An example of such a pattern would be a form field designed for a number suddenly receiving a long string with special characters, which might indicate an SQL injection attempt.

Most types of attacks that a WAF protects against are not applicable to decent, modern code; the purpose of a WAF is mostly to cover for old or sloppy code. WAFs are commonly deployed to simplify PCI DSS certification, where their presence, regardless of effectiveness or relevance, allows one to skip a manual inspection step.

MFA (Multi-factor authentication)

An authentication mechanism where more than one “factor” is needed to log in, such as a password (knowledge), the ability to read texts from a specific cell-phone number (ownership), or a specific fingerprint (biometrics).

The other factors may be unknown to the end-user, such as in the case of behaviour analysis.

Attack types

Malware

A malicious application, usually sold as a product on the black market and deployed by whoever purchased it, rather than used by its authors themselves.

New malware is usually first used for well-planned attacks by experienced organizations, and only at the end of its life does it make its way down to the script-kiddies that attack everyone and everything. Anti-virus protection arrives somewhere in the middle of its life.

Depending on the exact type, malware is also called: virus, worm, trojan, spyware, adware and others.

Brute-force attack

An attack type where every possible input is tried in sequence (possibly randomized). The size of keys for encryption (e.g. AES-128’s 128-bit key) directly affects how many keys exist and thus have to be tried. Brute-force is only practical against local copies of data as each attempt must be very fast, and even then only practical against short keys.

Attacks against cryptographic algorithms usually use a brute-force attack as benchmark for practicality: An attack only has practical value if it takes fewer operations than just trying every key, but attacks are published for theoretical value regardless.
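A back-of-the-envelope calculation shows why brute-forcing a 128-bit key is impractical, even under absurdly generous assumptions about attacker speed:

```javascript
// Even at a wildly optimistic trillion (10^12) guesses per second, a 128-bit
// keyspace takes on the order of 10^19 years to exhaust.
const keys = 2n ** 128n;                 // ~3.4 * 10^38 possible keys
const guessesPerSecond = 10n ** 12n;
const seconds = keys / guessesPerSecond; // ~3.4 * 10^26 seconds
const years = seconds / 31557600n;       // seconds in a Julian year
console.log(years > 10n ** 19n);         // true
```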

Web technologies

This section explains web technologies and concepts. This is relevant to understand product compatibility, and to have a general understanding of technologies utilized in Codesealer.


HTTP (Hyper-text transport protocol)

A text-based protocol capable of downloading and uploading content, with a bit of metadata.

HTTPS (Hyper-text transport protocol secure)

HTTP over TLS.


HTML (Hyper-text markup language)

HTML describes the content and structure of a web page. HTML can include text, images, video, and more. Styling can be done with cascading style sheets (CSS), either within the document or as external files.

While HTML is static, it is possible to load JavaScript, which can change the content of a web page dynamically at runtime, regardless of the content of the HTML file.

CSS (Cascading style sheet)

CSS describes advanced rules for changing the appearance of content. Placement, colors, fonts, gradients, animations, backgrounds, etc. can all be handled by stylesheets. It is a core component of anything but the simplest plain-text web pages.

While CSS is static, it is possible for JavaScript to dynamically change CSS at runtime, regardless of the content of HTML or CSS files.

JavaScript

JavaScript is a scripting language initially designed on the back of a napkin at Netscape to make simple web pages slightly interactive. The language has since grown out of control to be a general purpose programming language used for everything, everywhere by those that like it.

The primary use of JavaScript is to make advanced web applications, such as Google Docs or trading platforms, where the web page must be heavily interactive.

JavaScript has a bad rep for being slow and overused, as many basic web pages such as news sites are written like an advanced web application, where all the extra overhead of starting and running a web application provides no additional value over a static HTML document.

JavaScript has nothing to do with Java. Its name was picked to ride the popularity of Java at the time. The formal name of the language specification is ECMAScript.

DOM (Document object model)

The DOM is the interface used by JavaScript to modify the document currently seen by the user. The current value of the DOM can be seen by right-clicking on a web page and selecting Inspect Element. It is commonly visualised as HTML for simplicity.

To begin with, the DOM contains only the content of the initial HTML document.

Website types and terms

Web page

A specific page on a website, e.g. the initial page. That is, a single HTML document. A web page is usually a simple, non-interactive content-delivery page, as opposed to an interactive web application.

Website

A collection of web pages. Usually used for something less interactive or complex than a “web application”, and usually for “simple” content delivery.

Web application

An interactive website with advanced functionality, e.g. a trading application, Google Docs, or similar.

A web application usually includes both client and server components. Some years ago, most of the functionality was implemented on the server, but heavy JavaScript applications are the norm in this day and age.

Single page application

A website or web application that exists entirely on a single web page (i.e. HTML document). The alternative is a website or application that is split over several web pages, each implementing only a subset of the functionality.

Whether an application is single-page is mostly invisible to the user. The difference is that a single-page application loads more slowly initially but is commonly faster afterwards, and that the server does not involve itself in how the page looks.

Most modern web applications are single page applications.

Server-side application

An application where the server sends HTML to the client every time the document should change appearance. This is generally considered an older technique.

The benefit is that initial loading time is fast, and few client-side resources are required. The downsides include that all requests transfer the entire page, and that page load does not look “pretty”.

Client-side application

An application where the entire application is sent to the client, which then communicates with the backend over an API. This is generally considered the “modern” way to do things, irrespective of whether it makes sense to do so.

The benefit is that API requests can be very small, that page load can be “pretty”, and that interactivity can generally be greater. The downside is that initial load is much slower, that more client-side resources are required (burning battery), and that it is more complex and error-prone.

