Category: security

How to create simple encrypted remote backups

30.03.2021 yahe administration linux security

Every once in a while I get asked whether a certain backup scheme is a good idea, and oftentimes the suggested backup solution is beyond what I would use myself. Duplicity, its simplifying frontend Duply and the not-so-dissimilar contenders Borg and Restic are among the solutions that are mentioned most often, with solutions like Bacula and its offspring Bareos coming up much later.

Unfortunately, I would not trust any of these tools further than I could throw a hard drive containing a backup created with them. The reason for this is rather simple: in my opinion, all of these solutions are too complex to handle in a worst-case scenario.

As soon as I mention this opinion, most people I talk to want to know how I do backups and whether I have ever tried those integrated solutions. Yes, I used Duplicity for years, until a system of mine broke down. I had an up-to-date backup of that system but still lost a lot of (fortunately not so important) data because the Duplicity backup had become inconsistent over time without notice. I was able to manually extract some data out of the backup, but it was not worth the time. That was the moment when I decided that I did not want to rely on such software again.

1. The design goals

There are several design goals that I wanted to achieve with my personal backup solution:

  • It should be encrypted.
  • It should be easy to test the restorability of the backup.
  • It should work with off-the-shelf software that people already know.
  • It should be suitable for cost-efficient remote backups in the cloud.
  • It should work with backup targets without requiring special server software.

There are also some non-goals that are not so important to me:

  • It is not required to back up live systems. A file-based backup is sufficient, and there are means to prevent files from being modified during the backup process, such as temporary filesystem snapshots that are used as the basis for the actual backup (a rough snapshot sketch follows after this list).
  • It is not required to have versioned backups. While this would be an added bonus, having an up-to-date backup that can be restored reliably is much more important.
  • It is not required to deduplicate content. Deduplication increases complexity.
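
As a rough illustration of the snapshot idea, a temporary LVM snapshot could serve as the read-only source for the backup. This is only a sketch: the volume group vg0, the logical volume data, the snapshot name and the snapshot size are placeholders that depend on your setup.

### create a temporary snapshot of the filesystem to back up
### (volume group and logical volume names are placeholders)
# lvcreate --snapshot --name backupsnap --size 5G /dev/vg0/data
# mkdir -p /mnt/backupsnap
# mount -o ro /dev/vg0/backupsnap /mnt/backupsnap

### remove the snapshot again after the backup has finished
# umount /mnt/backupsnap
# lvremove /dev/vg0/backupsnap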

2. The encryption layer

Let us start with the encryption. For this I have chosen the FUSE wrapper GoCryptFS which is available on GitHub. It is developed by @rfjakob who was one of the most active maintainers of the well-known EncFS encryption layer back in 2015-2018. The "project was inspired by EncFS and strives to fix its security issues while providing good performance" and it looks like he achieved that goal.

Using GoCryptFS is pretty simple. After downloading the static binary from the GitHub repository you can create the required folders and initialize a so-called reverse repository. A reverse repository takes an unencrypted source directory and provides an encrypted view of the contained files through a second directory. This way you can encrypt files in-memory on-the-fly instead of requiring additional storage space for an encrypted copy of the files. The ad-hoc encryption and decryption of GoCryptFS will come in handy for restores as well.

For our purposes we will create three folders:

  • ./unencrypted will contain our source material to be backed up
  • ./encrypted will contain the ad-hoc encrypted files to be backed up
  • ./decrypted will contain the ad-hoc decrypted files of the backup

### create the local folders
mkdir -p ./unencrypted ./encrypted ./decrypted

### initialize the reverse encryption
gocryptfs --init --reverse ./unencrypted

### you can use the --plaintextnames parameter
### if the file names are not confidential
# gocryptfs --init --reverse --plaintextnames ./unencrypted

### mount the unencrypted folder in reverse mode
gocryptfs --reverse ./unencrypted ./encrypted

After initializing the reverse repository in the ./unencrypted folder you will find a new file called ./unencrypted/.gocryptfs.reverse.conf. This file contains relevant encryption parameters that are required to be able to encrypt the files. When the reverse repository is mounted into the ./encrypted folder you will find a file called ./encrypted/gocryptfs.conf which is an exact copy of the previous ./unencrypted/.gocryptfs.reverse.conf file. It is required to be able to decrypt the files again. You must not lose this file!

There are two possibilities to protect yourself against losing it:

  • You can create a paper-based backup of ./encrypted/gocryptfs.conf (a small sketch follows after this list). As the configuration can only be used in conjunction with the corresponding password, it should be safe to store this paper-based backup somewhere, even if someone should be able to read it.
  • You can create a paper-based backup of the masterkey that is printed to the screen when initializing the reverse repository. However, you have to make sure that no-one can read the masterkey as it is not password-protected.
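
A minimal sketch of the first option: simply print out the configuration file. The lpr command is just an example; any other way of getting the file onto paper works as well.

### display the encryption parameters for a paper-based backup
cat ./encrypted/gocryptfs.conf

### or send them directly to a printer
# cat ./encrypted/gocryptfs.conf | lpr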

If you did not use the --plaintextnames option you should also find a new file called ./encrypted/gocryptfs.diriv. By default, GoCryptFS encrypts not only the file content but also the file names. The directory initialization vectors (dirivs) contained in the gocryptfs.diriv files are required to be able to decrypt the file names again. If you cannot risk losing file names or if your file names are not confidential, then it might be better for you to disable the file name encryption.

3. The remote access

I use a FUSE wrapper to mount the remote storage as if it were a local device. Typically your backup software would have to be able to upload files to the remote storage itself, unnecessarily complicating things. The FUSE wrapper takes this complexity away from the actual backup tool. Using a FUSE wrapper will also come in handy later on when restoring data.

Typical FUSE wrappers include (rough mount sketches for the first two follow after this list):

  • davfs2 for WebDAV compatible storage
  • s3fs for AWS S3 compatible storage
  • SSHFS for SFTP storage
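
For illustration, mounting via the first two wrappers might look roughly like this. The WebDAV URL, the S3 bucket name, the endpoint URL and the credential file are placeholders, and the exact options depend on your storage provider; the commands also assume that the ./backup mount point created below already exists.

### mount WebDAV compatible storage via davfs2
### (typically needs root or a user-mountable /etc/fstab entry)
# mount -t davfs https://dav.example.com/backup ./backup

### mount S3 compatible storage via s3fs
# s3fs examplebucket ./backup -o url=https://s3.example.com -o passwd_file=${HOME}/.passwd-s3fs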

I personally use SSHFS as I use a remote VM with SFTP access to store my backups. This also reduces the amount of transferred data for the restore test as we will see later.

### create the remote folder
mkdir ./backup

### mount the remote storage
sshfs backup@backup.example.com:/backup ./backup

### you should use additional parameters
### if you run into problems
# sshfs backup@backup.example.com:/backup ./backup -o ServerAliveInterval=15 -o idmap=user -o uid=$(id -u) -o gid=$(id -g) -o rw

### create the remote subfolders
mkdir ./backup/checksums ./backup/files ./backup/snapshots

4. The backup process

Now we are ready to create the actual backup. For this we will use rsync which is used far and wide for such tasks and has some nice benefits:

  • By default, rsync identifies modified files through their size and last-modification date. Thanks to the FUSE wrapper the local size and last-modification date and the remote size and last-modification date can easily be compared without having to download the files. So, unlike other approaches, you do not need a local copy of your whole backup. (If you use an SSH server as a backup target you could also use the integrated SSH support of rsync; a rough sketch of that variant follows after the code block below.)
  • rsync can easily be restarted should a synchronization fail. Unlike other solutions you do not have to wonder what happens when a backup task really fails. Files are encrypted in-memory thanks to GoCryptFS and rsync just starts comparing the local and remote copy from the beginning when restarting the synchronization process.
  • Thanks to the --backup-dir= and --delete parameters rsync provides a rather simple versioning of files. Files that have changed or that have been deleted between synchronizations are moved to the provided backup directory path and can easily be accessed. If you need more storage you can delete backup directories of earlier synchronizations.

### copy over files and keep modified and deleted files
rsync -abEP "--backup-dir=../snapshots/$(date '+%Y%m%d-%H%M%S')/" --delete ./encrypted/ ./backup/files/

### you should add the --chmod=+w parameter
### if created folders in the backup target are not writable
# rsync -abEP "--backup-dir=../snapshots/$(date '+%Y%m%d-%H%M%S')/" --chmod=+w --delete ./encrypted/ ./backup/files/
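
As mentioned in the first bullet point above, rsync can also talk to an SSH-based backup target directly instead of writing through the SSHFS mount. This is only a rough sketch reusing the host and paths from the example setup; note that a relative --backup-dir is interpreted relative to the destination directory on the remote side.

### alternative: use the integrated SSH support of rsync
### instead of writing through the SSHFS mount
# rsync -abEP "--backup-dir=../snapshots/$(date '+%Y%m%d-%H%M%S')/" --delete ./encrypted/ backup@backup.example.com:/backup/files/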

5. The restore test (regularly)

You do not have a proper backup unless you have successfully tried to restore it. However, restore-testing remote backups can be resource-intensive. The way I do it is to calculate checksums of the local files and of the remote files, which are then compared to make sure that the remote copy is identical to the local copy.

### enter the encrypted directory
cd ./encrypted

### create checksums of all files
find . -type f -print0 | xargs -0 sha1sum > ../original

### copy the checksums over to the remote server
cp ../original ../backup/checksums/original

### leave the encrypted directory
cd ..

Calculating the checksums of the remote files through the FUSE wrapper is possible. Unfortunately, this would mean downloading the whole backup in the background. As the checksum calculation is separated from the checksum comparison we can optimize things a bit. Given that you have SSH access to the remote target you can log into the remote server, calculate the checksums there and only transfer the checksum file to compare it with the checksums of the original files. This greatly reduces the amount of data that has to be transferred for the restore test.

### enter the backup files directory
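### note: run this on the remote server itself (e.g. after
### "ssh backup@backup.example.com") and adjust the path to
### wherever the backup is stored there, so that the whole
### backup does not get downloaded through the FUSE mount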
cd ./backup/files

### create checksums of all files
find . -type f -print0 | xargs -0 sha1sum > ../checksums/backup

### leave the backup files directory
cd ../..

Comparing checksums that have been written to files has one caveat: the files might be sorted differently. You have to remember this and sort the checksum files before diffing them, otherwise you might find a lot of deviations.

### sort the checksums
sort ./backup/checksums/backup > ./backup/checksums/backup.sorted
sort ./backup/checksums/original > ./backup/checksums/original.sorted

### compare the checksums
diff ./backup/checksums/original.sorted ./backup/checksums/backup.sorted

6. The additional restore test (at least once)

The suggested restore test has one small imperfection. It may speed up the comparison of the local and remote copy, but this is only true for the encrypted files. Normally you would want to make sure that the decrypted files are identical to the original unencrypted files as well. There are two different approaches to achieve this:

  • You could download the whole backup, mount that backup via GoCryptFS and then compare the original unencrypted files and the decrypted backup. However, to do this you would have to transfer a lot of data back to your local storage and keep that second copy for the comparison.
  • Instead of calculating the checksums of the encrypted files you could calculate them of the unencrypted files. You could log into the remote target via SSH, mount the remote backup via GoCryptFS and calculate the checksums of the decrypted backup. However, to do this you would have to trust the remote target, and if you trusted it, the encryption would not be needed in the first place.

The solution that I chose is a bit different: Thanks to the regular restore test I already know that the local encrypted files and the remote files are identical. That also means that the local encrypted files will decrypt to the exact same result as the remote files. So, I can just mount the local encrypted files via GoCryptFS which can then be compared to the original unencrypted files. As the encryption and decryption happen in-memory on-the-fly it is not necessary to keep a second copy of the data around.

### mount the encrypted folder in forward mode
gocryptfs ./encrypted ./decrypted

When calculating the checksums of the original unencrypted files we have to ignore the .gocryptfs.reverse.conf file as it will not be present after the decryption.

### enter the unencrypted directory
cd ./unencrypted

### create checksums of all files
### but ignore the gocryptfs config file
find . -type f ! -path "./.gocryptfs.reverse.conf" -print0 | xargs -0 sha1sum > ../unencrypted

### leave the unencrypted directory
cd ..

Calculating the checksums of the decrypted files might take a bit longer. Remember that in this case each file is read from disk, encrypted and then decrypted before calculating the actual checksum.

### enter the decrypted directory
cd ./decrypted

### create checksums of all files
find . -type f -print0 | xargs -0 sha1sum > ../decrypted

### leave the decrypted directory
cd ..

After sorting the checksum files we can finally compare them.

### sort the checksums
sort ./decrypted > ./decrypted.sorted
sort ./unencrypted > ./unencrypted.sorted

### compare the checksums
diff ./unencrypted.sorted ./decrypted.sorted

This whole process does not necessarily have to be done for each and every backup. It is primarily used to make sure that the encryption layer works as expected. After you are done, do not forget to unmount the decryption folder.

### unmount the directory
fusermount -u ./decrypted

7. The restore

Thanks to the usage of the FUSE wrapper it is pretty easy to restore files from the remote backup. For other applications the FUSE mount looks like any local storage device which means that you can also mount the remote backup directly via GoCryptFS.

### mount the backup folder in forward mode
gocryptfs ./backup/files ./decrypted

Now it is possible to browse the backup and search for the files that you want to restore. Once you have found them, you can just copy them over. During the copy process the files will be downloaded and decrypted in-memory on-the-fly. After you are done, just unmount the decryption folder.
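
The copy itself is then just an ordinary file copy, for example like this (the source path is a made-up placeholder):

### copy a file out of the mounted backup to restore it
# cp -a "./decrypted/path/to/file" ./restored-file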

### unmount the directory
fusermount -u ./decrypted

8. Closing up

There we have it. By combining several tools that each do their own job we have created a backup solution that is - in my opinion - easy to understand and use. Together these tools make up a solution that is better than the individual parts alone:

  • We used GoCryptFS to encrypt files in-memory on-the-fly.
  • We used SSHFS to seamlessly access the remote target.
  • We used rsync to synchronize the local encrypted files to the remote target.
  • We used sha1sum, sort and diff to test the restorability of the remote backup.

Best of all: these tools are independent of each other. Most of them could be replaced should it become necessary. I hope that you can see the benefits of this approach to simple encrypted remote backups.

So, as a last step, do not forget to unmount the used folders. 😃

### unmount the directories
fusermount -u ./backup
fusermount -u ./encrypted

Cryptographic Vulnerabilities within the Nextcloud Server Side Encryption

16.11.2020 yahe publicity security

Nearly a year ago I wrote that I had taken an extensive look into the server side encryption that is provided by the Default Encryption Module of Nextcloud. I also mentioned that I had written some helpful tools and an elaborate description for people who have to work with its encryption.

What I did not write about at that time was that I had also discovered several cryptographic vulnerabilities. After a full year, these have now finally been fixed, the corresponding HackerOne reports have been disclosed and so I think it is about time to also publish the whitepaper that I have written about these vulnerabilities.

The paper is called "Cryptographic Vulnerabilities and Other Shortcomings of the Nextcloud Server Side Encryption as implemented by the Default Encryption Module" and is available through the Cryptology ePrint Archive as report 2020/1439. The vulnerabilities presented in this paper have received their own CVEs, namely:

  • CVE-2020-8133 went to the vulnerability described in the chapter "Insufficient integrity protection of files leads to breach of integrity (I)". More details can be found in the HackerOne report 661051 and in the Nextcloud Security Advisory NC-SA-2020-038.
  • CVE-2020-8150 went to the vulnerability described in the chapter "Insufficient integrity protection of files leads to breach of integrity (III)". More details can be found in the HackerOne report 742588 and in the Nextcloud Security Advisory NC-SA-2020-039.
  • CVE-2020-8152 went to the vulnerability described in the chapter "Insufficient integrity protection of files leads to breach of integrity (II)". More details can be found in the HackerOne report 743505 and in the Nextcloud Security Advisory NC-SA-2020-040.
  • CVE-2020-8259 went to the vulnerability described in the chapter "Insufficient integrity protection of public keys leads to breach of confidentiality". More details can be found in the HackerOne report 732431 and in the Nextcloud Security Advisory NC-SA-2020-041.

Having such an in-depth look into the implementation of a real-world application has been a lot of fun. However, I am also relieved that this project now finally comes to an end. I am eager to start with something new. 😃


Shared-Secrets: Cryptography Reloaded

17.12.2019 yahe code linux security

About 3 years ago I wrote about a tool called Shared-Secrets that I had written. Its purpose was to share secrets through encrypted links that should only be retrievable once. Back then I made the decision to base the application on GnuPG encryption, but over the last couple of years I had to learn that this was not the best of all choices. Here are some of the problems that I have found in the meantime:

  • The application started by using the ASCII-armoring of GnuPG to get human-readable outputs for the URL generation. Unfortunately, the ASCII-armoring introduced many possibilities to alter links and thus retrieve secrets more than once.
  • To clean up the interface to GnuPG the application was rewritten to use the GnuPG PECL extension. Unfortunately, this introduced integrity problems and was removed again shortly afterwards.
  • In 2018 the world had to learn through EFail that the integrity protection of GnuPG is actually optional. Thus, the application had to be enhanced to prevent unprotected messages from being decrypted.
  • After this problem I started to poke around GnuPG and the OpenPGP standard and learned that the message format does not support integrity protection for the actual message structure. This means that message packets can be added, moved around or removed. All of these modifications made it possible to alter links and thus retrieve secrets more than once.

As this last issue is a problem with the GnuPG message format itself, solving it required either changing or completely replacing the cryptographic basis of Shared-Secrets. After thinking about the possible alternatives I decided to design simple message formats and completely rewrite the cryptographic foundation. This new version was published a few weeks ago and a running instance is also available at secrets.syseleven.de.

This new implementation should solve the previous problems for good and will in the future allow me to implement fundamental improvements when they become necessary, as I now have a much deeper insight into the cryptographic algorithms used and the design of the message formats.


Nextcloud-Tools: Working with the Nextcloud Server-Side Encryption

02.12.2019 yahe administration code security update

At the beginning of the year we ran into a strange problem with our server-side encrypted Nextcloud installation at work. Files got corrupted when they were moved between folders. We had found another problem with the Nextcloud Desktop client just recently and therefore thought that this was also related to synchronization problems within the Nextcloud Desktop client. Later in the year we bumped into this problem again, but this time it occurred while using the web frontend of Nextcloud. Now we understood that the behaviour did not have anything to do with the client but with the server itself. Interestingly, another user had opened a GitHub issue about this problem at around the same time. As these corruptions led to quite some workload for the restores I decided to dig deeper to find an adequate solution.

After I had found out how to reproduce the problem it was important for us to know whether corrupted files could still be decrypted at all. I wrote a decryption script and proved that corrupted files could in fact be decrypted even when Nextcloud said that they were broken. With this in mind I tried to find out what happened during the encryption and what broke the files while they were being moved. Doing all the research about the server-side encryption of Nextcloud, debugging the software, creating a potential bugfix and coming up with a temporary workaround took about a month of interrupted work.

Even more important than the actual bugfix (as we are currently living with the workaround) is the knowledge we gained about the server-side encryption. Based on this knowledge I developed a bunch of scripts that have been published as nextcloud-tools on GitHub. These scripts can help you to rescue your server-side encrypted files in cases when your database was corrupted or completely lost.

I also wrote an elaborate description about the inner workings of the server-side encryption and tried to get it added to the documentation. It took some time but in the end it worked! For about a week now you can find my description of the Nextcloud Server-Side Encryption Details in the official Nextcloud documentation.

Update

Due to popular demand I wrote the decrypt-all-files.php script that helps you to decrypt all files that have been encrypted with the server-side encryption. It is accompanied by a somewhat extensive description of how to use it.


Vermeiden. Erkennen. Beheben. (Avoid. Detect. Remediate.)

21.08.2017 yahe legacy security thoughts

For quite some time now I have been basing the selection of security measures on a classification that I already used in the article about the active defense against crypto trojans: avoid, detect, remediate.

Until now I had assumed that this is a common model, after all it is used in many fields that deal with error prevention. With regard to information security management, however, I have not been able to find any relevant sources that deal with this classification of security measures. For this reason I decided to simply write down what I mean by these three points and how applying them can help to make the protection of your own information security more holistic.

tl;dr

Instead of only dealing with the question of how to avoid damage in risk-oriented security management, your processes should also take into account the questions of how to detect damage that has occurred and how to remediate that damage.

Fundamentals

Security management as it is practiced today is primarily risk-oriented. You build up a pool of potential threats; check whether a specific asset has vulnerabilities that a threat turns into an actual hazard; assess the resulting risk and consider how you want to handle that risk. The options are to accept the risk, i.e. you live with the fact that the risk exists; to transfer the risk to someone else, e.g. by taking out insurance; to avoid the risk, e.g. by choosing a different technical solution; or to reduce the risk by taking measures that help to lower the probability of occurrence or the impact.

As you can see, this way of looking at things is primarily limited to the avoidance of damage. What is missing is the consideration of measures that help to detect damage that has occurred and to remediate it. In my experience, this one-sided approach leads companies to fixate on fending off threats while spending too little effort on what happens if damage occurs anyway. Only recently does there seem to be a change in thinking here. Now that even security companies such as antivirus vendors and security agencies are getting hacked, the mantra "it is not a question of whether you will be hacked, but when" is slowly taking hold.

Classifying security measures into the categories avoidance, detection and remediation, whereby multiple classifications are quite possible depending on how a measure is used, can help to establish a holistic approach to security management.

Avoid

In my experience, the avoidance of damage is the set of measures that companies look at first when they start to deal with the topic of security or risk management. When I started to get into IT security, it was still commonly assumed that taking enough technical measures was sufficient to be fully protected. Using encryption, deploying antivirus scanners and firewalls, using RAIDs, creating backups and regularly patching applications were the classic measures to take. As described above, the classic risk management methods also lead to a primary focus on measures for avoiding damage.

Detect

The detection of damage is, at least in my experience, rarely driven by risk management considerations, but all the more often by the goal of keeping operations running. The classic detection method is the use of a monitoring solution in the IT environment. However, this is very often limited to topics such as the availability of services and the resource utilization of hardware.

Only rarely is it used to regularly scan for modified files, illegitimate network connections, unknown processes or known processes with unknown parameter lists. One reason is probably that the detection of availability problems is quite easy to standardize, while the behavioural patterns of systems and applications can differ widely and require individual profiling. As a result, such more complex detections in the form of IDS and IPS solutions only come into play once the company's overall security management has reached a certain level of maturity.

Another classic measure is the introduction of a central logging infrastructure, which in principle makes it possible to detect potential damage by analyzing a multitude of system events. But here as well, appropriate triggers first have to be defined as part of a SIEM solution before any actual benefit can be drawn from the measure.

But even beyond technology, as part of your risk management you should always ask yourself which measures are taken to be able to detect the occurrence of damage in the first place.

Remediate

Once you have thought about how to detect damage that has occurred, the final task is to consider how to remediate that damage. A classic remediation measure is restoring backups or reinstalling compromised systems. Depending on the application, however, a more granular approach may be necessary, or there may be an opportunity to remediate individual damages automatically.

In my experience, many remediation measures initially consist of spontaneous ideas that are put into practice when an actual emergency occurs. They then solidify through the formulation of recovery plans or, more broadly, emergency plans as part of establishing an emergency management. At first, the resulting plans often rely mainly on manual processes. Only with increasing maturity are manual tasks replaced by automated ones, because here, too, many individual aspects have to be taken into account.

Beyond technology, you should also think about the remediation of damage. How do you react as a company when essential resources are not available, and how would you proceed if sensitive data were lost?

A practical example

To show this threefold classification of measures in interplay, I would like to pick antivirus software as a vivid example, regardless of the discussion about its actual usefulness.

We start with the main task of antivirus software, the detection of viruses. In a first step only individual files will have been checked, and this measure was extended bit by bit: to scanning whole folders, to scanning the whole system and eventually also to scanning the boot sector. In the past, detection was based solely on a virus signature, i.e. information about what the compiled code of the virus looks like.

Once viruses could be detected, the question arose of how to deal with them, i.e. what the remediation of a virus infection could look like. You could, for example, move them into quarantine, delete them directly or, with enough knowledge about the virus, possibly even remove it from an infected file.

As things progressed, hard drives became bigger and scans therefore more time-consuming, so you might no longer get around to regularly checking all files manually. Nevertheless, it was important to prevent a virus-infected file from being opened or distributed. So files started to be checked right before they were actually accessed. The live scanning of files helped to avoid the execution of viruses, but at the same time it also improved the detection of viruses.

Viruses became smarter and polymorphic, their code changed regularly, so that simple virus signatures were no longer able to detect them. Only through the introduction of heuristics did antivirus software become able to detect these new, more complex viruses at all.

The internet arrived and displaced removable media as the primary entry point for viruses. The makers of antivirus software went along and started to hook into the data channel. The goal was to be able to detect viruses even before they are stored on the user's hard drive. This was also meant to avoid users having any chance to do anything with the virus-infected file in the first place.

In the business environment the risks posed by a virus infection are even greater than in other areas. Virus writers specifically target this area, sometimes even with specially developed viruses. Antivirus vendors have recognized this as well and offer specific solutions for such companies. One solution is to deliberately execute applications in a VM of a scanning appliance before they are downloaded to a user's machine, in order to detect suspicious behaviour of the application beforehand. The aim here is to avoid the execution of a virus on a user's machine.

Conclusion

As the example of antivirus software nicely shows, classifying measures into the areas of avoidance, detection and remediation can help to understand them better as part of a holistic security approach. Depending on the maturity of the security management in your own company, you should take a look at which of the three areas still has some catching up to do regarding the planned and implemented measures.

