Dropbox: Use the API in 5 Simple Steps

07.01.2020 yahe code

A few weeks ago I wrote about the new cryptographic basis of the Shared-Secrets service. What I did not mention was that one user asked whether a file-sharing option could be added. I declined because sharing files is not what the service is meant for. Nevertheless, I tried out whether sharing files would be possible.

To share the files I needed a place to store them, but I did not feel like storing them on the actual server. So I searched for a simple file storage service. The first one that came to my mind was Dropbox. I had never used Dropbox or the Dropbox API before, but felt that it should not be all too difficult. What I did not expect was how easy it actually is to use the Dropbox API. Because of this simplicity I decided to write a short blog post about it. So here are 5 simple steps to use the Dropbox API...

1. Register your Account

First of all you need to register a Dropbox account. Use a valid e-mail address that you have access to, because you will have to verify it later on.

Dropbox Account Registration

2. Verify your E-Mail Address

After logging in with your new account you can visit the app creation page where you are requested to verify your e-mail address. You have to proceed with the verification in order to create your own Dropbox App.

Dropbox E-Mail Address Verification

3. Create your Dropbox App

Now that you have verified your e-mail address you can visit the app creation page again to create the actual app. Select the Dropbox API, choose to store the files in a separate app folder, pick a name for your app and you are ready to go.

Dropbox App Creation

4. Retrieve Your Access Token

When you have created the app, its details will be shown to you. Scroll down a bit and you will find a button labeled "Generate" in the "OAuth 2" section that generates your individual access token. The access token displayed below the "Generated access token" heading is needed to authenticate against your Dropbox account.

Dropbox App 1 Dropbox App 2

5. Use the Dropbox API

Using the Dropbox API itself is relatively simple as most actions can be done with a single REST API call. Here are some PHP examples that illustrate the usage of the Dropbox API. First of all we define the Access Token which is needed by all API calls:

  // see https://blogs.dropbox.com/developers/2014/05/generate-an-access-token-for-your-own-account/
  define("DROPBOX_ACCESS_TOKEN", "YOUR DROPBOX ACCESS TOKEN");

Now, in order to store a given string ($content) in Dropbox we can use the following function. On failure it returns NULL and on success it returns a random identifier through which the content can be retrieved again:

  function dropbox_upload($content) {
    $result = null;

    // store content in memory
    if ($handler = fopen("php://memory", "w+")) {
      try {
        if (strlen($content) === fwrite($handler, $content)) {
          if (rewind($handler)) {
            // get random filename
            $filename = bin2hex(openssl_random_pseudo_bytes(32));

            if ($curl = curl_init("https://content.dropboxapi.com/2/files/upload")) {
              try {
                $curl_headers = ["Authorization: Bearer ".DROPBOX_ACCESS_TOKEN,
                                 "Content-Type: application/octet-stream",
                                 "Dropbox-API-Arg: {\"path\":\"/$filename\"}"];

                curl_setopt($curl, CURLOPT_HTTPHEADER,     $curl_headers);
                curl_setopt($curl, CURLOPT_PUT,            true);
                curl_setopt($curl, CURLOPT_CUSTOMREQUEST,  "POST");
                curl_setopt($curl, CURLOPT_INFILE,         $handler);
                curl_setopt($curl, CURLOPT_INFILESIZE,     strlen($content));
                curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);

                $response = curl_exec($curl);
                if (200 === curl_getinfo($curl, CURLINFO_RESPONSE_CODE)) {
                  $result = $filename;
                }
              } finally {
                curl_close($curl);
              }
            }
          }
        }
      } finally {
        fclose($handler);
      }
    }

    return $result;
  }
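A small aside: the function interpolates $filename directly into the Dropbox-API-Arg JSON header, which is safe here because the filenames are hex-only. For arbitrary paths it is more robust to let json_encode() build the header value. A sketch with a hypothetical helper name:

```php
// Sketch (not part of the original functions): build the Dropbox-API-Arg
// header value with json_encode() so that quotes, backslashes and non-ASCII
// characters in the path are escaped correctly.
function dropbox_api_arg($path) {
  return "Dropbox-API-Arg: ".json_encode(["path" => $path], JSON_UNESCAPED_SLASHES);
}
```

The same helper could then be used in all three functions instead of the hard-coded string.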

To retrieve the stored content the following function can be used. It requires the random identifier ($filename) as the input parameter:

  function dropbox_download($filename) {
    $result = null;

    if ($curl = curl_init("https://content.dropboxapi.com/2/files/download")) {
      try {
        $curl_headers = ["Authorization: Bearer ".DROPBOX_ACCESS_TOKEN,
                         "Content-Type: application/octet-stream",
                         "Dropbox-API-Arg: {\"path\":\"/$filename\"}"];

        curl_setopt($curl, CURLOPT_HTTPHEADER,     $curl_headers);
        curl_setopt($curl, CURLOPT_POST,           true);
        curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);

        $response = curl_exec($curl);
        if (200 === curl_getinfo($curl, CURLINFO_RESPONSE_CODE)) {
          $result = $response;
        }
      } finally {
        curl_close($curl);
      }
    }

    return $result;
  }

In order to delete the stored content the following function can be used. It again requires the random identifier ($filename) as the input parameter:

  function dropbox_delete($filename) {
    $result = null;

    if ($curl = curl_init("https://api.dropboxapi.com/2/files/delete_v2")) {
      try {
        $curl_fields  = json_encode(["path"  => "/$filename"]);
        $curl_headers = ["Authorization: Bearer ".DROPBOX_ACCESS_TOKEN,
                         "Content-Type: application/json"];

        curl_setopt($curl, CURLOPT_HTTPHEADER,     $curl_headers);
        curl_setopt($curl, CURLOPT_POST,           true);
        curl_setopt($curl, CURLOPT_POSTFIELDS,     $curl_fields);
        curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);

        $response = curl_exec($curl);
        if (200 === curl_getinfo($curl, CURLINFO_RESPONSE_CODE)) {
          $result = $response;
        }
      } finally {
        curl_close($curl);
      }
    }

    return $result;
  }

And there we have it. With a few lines of code we were able to store, retrieve and delete content in Dropbox. In my case I also encrypted the content before storing it in Dropbox, something you should consider as well if you are handling confidential content. Other than that, the functions shown cover most of what you will need in the beginning.
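If you want to encrypt before uploading as well, a minimal sketch could look like the following (hypothetical helper names, AES-256-GCM via PHP's OpenSSL extension; my actual implementation differs):

```php
// Minimal encrypt-before-upload sketch (hypothetical helper names).
// AES-256-GCM keeps the stored content both confidential and
// integrity-protected; the key never has to leave your server.
function encrypt_content($content, $key) {
  $iv  = openssl_random_pseudo_bytes(12);
  $tag = "";
  $ciphertext = openssl_encrypt($content, "aes-256-gcm", $key,
                                OPENSSL_RAW_DATA, $iv, $tag);

  // prepend IV (12 bytes) and tag (16 bytes) so that
  // decrypt_content() is self-contained
  return $iv.$tag.$ciphertext;
}

function decrypt_content($blob, $key) {
  $iv  = substr($blob, 0, 12);
  $tag = substr($blob, 12, 16);

  return openssl_decrypt(substr($blob, 28), "aes-256-gcm", $key,
                         OPENSSL_RAW_DATA, $iv, $tag);
}
```

You would then call dropbox_upload(encrypt_content($secret, $key)) and decrypt the result of dropbox_download() accordingly.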


Shared-Secrets: Cryptography Reloaded

17.12.2019 yahe code linux security

About 3 years ago I wrote about a tool called Shared-Secrets that I had written. Its purpose is to share secrets through encrypted links that can only be retrieved once. Back then I decided to base the application on GnuPG encryption, but over the last couple of years I had to learn that this was not the best of all choices. Here are some of the problems that I have found in the meantime:

  • The application started by using the ASCII armoring of GnuPG to get human-readable output for the URL generation. Unfortunately, the ASCII armoring introduced many possibilities to alter links and thus retrieve secrets more than once.
  • To clean up the interface to GnuPG the application was rewritten to use the GnuPG PECL extension. Unfortunately, this introduced integrity problems and was removed again shortly afterwards.
  • In 2018 the world had to learn through EFail that the integrity protection of GnuPG is actually optional. Thus, the application had to be enhanced to prevent unprotected messages from being decrypted.
  • After this problem I started to poke around GnuPG and the OpenPGP standard and learned that the message format does not support integrity protection for the actual message structure. This means that message packets can be added, moved around or removed. All of these modifications made it possible to alter links and thus retrieve secrets more than once.

As this last issue is a problem of the GnuPG message format itself, solving it required either changing or completely replacing the cryptographic basis of Shared-Secrets. After thinking about the possible alternatives I decided to design simple message formats and completely rewrite the cryptographic foundation. This new version was published a few weeks ago and a running instance is also available at secrets.syseleven.de.

This new implementation should solve the previous problems for good and will in the future allow me to implement fundamental improvements when they become necessary, as I now have a much deeper insight into the cryptographic algorithms used and the design of the message formats.


Nextcloud-Tools: Working with the Nextcloud Server-Side Encryption

02.12.2019 yahe administration code security

At the beginning of the year we ran into a strange problem with our server-side encrypted Nextcloud installation at work. Files got corrupted when they were moved between folders. We had found another problem with the Nextcloud Desktop client just recently and therefore thought that this was also related to synchronization problems within the Nextcloud Desktop client. Later in the year we bumped into this problem again, but this time it occurred while using the web frontend of Nextcloud. Now we understood that the behaviour did not have anything to do with the client but with the server itself. Interestingly, another user had opened a GitHub issue about this problem at around the same time. As these corruptions led to quite some workload for the restores I decided to dig deeper to find an adequate solution.

After I had found out how to reproduce the problem it was important for us to know whether corrupted files could still be decrypted at all. I wrote a decryption script and proved that corrupted files could in fact be decrypted even when Nextcloud said that they were broken. With this in mind I tried to find out what happened during the encryption and what broke the files while they were being moved. Doing all the research about the server-side encryption of Nextcloud, debugging the software, creating a potential bugfix and coming up with a temporary workaround took about a month of interrupted work.

Even more important than the actual bugfix (as we are currently living with the workaround) is the knowledge we gained about the server-side encryption. Based on this knowledge I developed a bunch of scripts that have been published as nextcloud-tools on GitHub. These scripts can help you to rescue your server-side encrypted files in cases when your database was corrupted or completely lost.

I also wrote an elaborate description of the inner workings of the server-side encryption and tried to get it added to the documentation. It took some time but in the end it worked! For about a week now you can find my description of the Nextcloud Server-Side Encryption Details in the official Nextcloud documentation.


twastosync: Synchronize Mastodon Toots to Twitter

18.11.2019 yahe code

Ever since I set up my own Mastodon instance and created my own account I have wanted to be able to cross-post messages between the two platforms. I searched for different solutions like IFTTT but I never got them to work properly - until I found dlvr.it which worked right out of the box for me. Well, at least at the beginning...

I have to admit that I did not use Mastodon regularly for quite some time and even thought about deleting my instance, but I wanted to give it another try now and wanted to use the synchronization feature - which did not work anymore. So I checked my dlvr.it account and found out that the synchronization had silently broken when Mastodon switched from providing Atom feeds to providing RSS feeds. I reconfigured the synchronization, just to find out that dlvr.it has now limited the free tier to three posts per day. Think about it: three. posts. per day.

Looking at the pricing of dlvr.it I just had to laugh. They want me to pay $8.29 per month for an unlimited amount of messages. In comparison, the server that my Mastodon instance is running on is cheaper than that. My next step was to look for alternatives: circleboom also allows only three posts from an RSS feed per day and for an unlimited number of posts they want to charge you $17.00 per month or $5.99 per month if you are on an annual subscription. socialoomph does not even provide RSS feed synchronization in the free tier and comes at $15.00 per month or $162.00 per year. Finally, I also looked at Hootsuite, which is the strangest of them all. In order to auto-publish posts from an RSS feed you actually have to subscribe to the paid RSS AutoPublisher app which wants to charge you $5.99 per month.

That was the point where I decided to build something on my own. Because, what is better than writing code? Writing code and saving money by doing so. I took one of my previous Twitter tools as a basis and created twastosync. The tool uses the TwitterOAuth library to communicate with the Twitter API and the simplexml_load_string() function of PHP to parse the RSS feed that you can find at the URL https://mastodon-instance-url/@username.rss. Finally, it uses my unchroot library to prevent concurrent executions.
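The feed parsing itself is only a few lines; a sketch of extracting the items from such an RSS feed (hypothetical function name, assuming a standard RSS 2.0 structure; twastosync itself may differ in detail):

```php
// Sketch: extract the entries from an RSS 2.0 feed with simplexml_load_string().
// Returns a list of ["link" => ..., "text" => ...] entries, or an empty list
// if the feed cannot be parsed.
function parse_feed_items($rss) {
  $result = [];

  // suppress warnings for malformed XML and just return an empty list
  if ($xml = @simplexml_load_string($rss)) {
    foreach ($xml->channel->item as $item) {
      $result[] = ["link" => (string) $item->link,
                   "text" => (string) $item->description];
    }
  }

  return $result;
}
```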

Once you have installed and configured the script you can use something like cron to call it regularly. Thanks to the concurrency prevention you can even call it every minute. One step that might be a bit scary is using the app registration page of Twitter to register your own bot. I did this myself and no problems popped up.
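A crontab entry along these lines (the path to the script is a placeholder) runs the synchronization every minute:

```
# call twastosync every minute; the lock provided by the unchroot
# library prevents overlapping executions (path is a placeholder)
* * * * * /usr/bin/php /path/to/twastosync.php >/dev/null 2>&1
```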

There we are: With a small script we are able to synchronize our Mastodon toots to Twitter without relying on a third party platform and by doing it ourselves we are even saving real money. 🤑


.local, .home.arpa and .localzone.xyz

11.11.2019 yahe administration

25 years ago the IETF defined special IPv4 addresses in RFC 1597 to be used solely in private intranets and with that created the separation between internal networks and the internet. 5 years later they proceeded to define special-use top-level domains like .example, .invalid and .test in RFC 2606. However, the IETF did not reserve a top-level domain to be used within the private intranets that they had introduced before. This led to confusion among administrators that still persists to the current day.

Every time an administrator was tasked to design a network structure they also had to think about naming conventions within the network. As there was a pre-defined set of "valid" top-level domains within the internet, it was rather easy to just select an "invalid" TLD and use that for the private network. At least, until the ICANN decided to delegate new TLDs to anyone who was willing to apply and pay the required fees. The ICANN even provided a paper on how to identify and mitigate name collisions to be used by professionals.

With RFC 6762 the .local TLD was approved as a special-use TLD to be used in internal networks and many administrators thought that this would solve the problems, but unfortunately, it did not. .local was designated to be used with multicast DNS, meaning that each device in a network could grab its preferred hostname. At the beginning this was not a big deal, but as more and more devices implemented Apple's Bonjour protocol (also known as zeroconf) interoperability problems started to pop up. Appendix G of said RFC even mentioned alternative TLDs that might be used, however, the ICANN did not accept any of them as special-use domain names.

Years later the new RFC 7788 defined .home as a potential local-only TLD, but it was changed to the rather unusual .home.arpa domain name with RFC 8375. This domain is safe to be used in local networks and was accepted as a special-use domain name by the ICANN. Unfortunately, due to the word "home" in the domain it is not quite fitting for business environments.

There are two more TLDs that might be safe to use: .corp and .home are not officially recognized special-use domain names but the ICANN has refrained from delegating them to bidders as the risk of breaking internal networks is deemed to be too big.

As of today, most tutorials still propose to use a publicly registered domain for internal networks. This is why I registered .localzone.xyz about half a year ago. It is explicitly meant to be used locally, has public DNS records set to prevent any public CA from issuing trusted TLS certificates for the domain as well as DNS records declaring all mail using the domain as SPAM, and will not be used for publicly accessible services as long as it is owned by me. I am using this domain internally myself and am also trying to get the domain added to the public suffix list. This list defines domain and cookie boundaries so that e.g. example-a.com is not able to set cookies for example-b.com.
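The restrictions mentioned above can be expressed with standard DNS records; an illustrative zone-file excerpt (not the actual records of the domain) could look like this:

```
; forbid all public CAs from issuing certificates for the domain (RFC 8659)
localzone.xyz.  IN  CAA  0 issue ";"
localzone.xyz.  IN  CAA  0 issuewild ";"
; declare that no host sends legitimate mail for the domain
localzone.xyz.  IN  TXT  "v=spf1 -all"
```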

So if you are looking for a domain that you can use for internal domain names then look no further: Either use the special-use domain name .home.arpa (which might not be fitting for businesses) or use .localzone.xyz as I am. It makes clear that you are in a local environment without being too specific about whether it is a private or business network.

