Experimental support for the requests package


I’ve just pushed a branch of the latest version of libcloud using the popular requests package by Kenneth Reitz instead of our home-rolled HTTP client library.

This article is for both users and developers of libcloud. If you want to give feedback, please join the developer mailing list.


  • requests is the de facto standard - it would have been added to the standard library, but this was decided against so that it could keep developing quickly
  • it works with Python 2.6 through 3.5
  • Our SSL experience leaves a lot to be desired for Windows users, who have to download the CA cert package and set environment variables just to get SSL working
  • Developers can use requests_mock for deeper integration testing (see the sketch after this list)
  • less code to maintain
  • the role of libcloud is cloud abstraction; we add no value by writing and maintaining our own HTTP client library
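As a sketch of the requests_mock point above: a test can register canned responses for any URL without monkey-patching connection classes. This is plain requests_mock usage; the URL and payload are placeholders, not actual Libcloud test code.

import requests
import requests_mock

with requests_mock.Mocker() as m:
    # Register a canned JSON response for a GET to the placeholder URL
    m.get('', json={'droplets': []})
    response = requests.get('')
    assert response.json() == {'droplets': []}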

Benefits of requests

There are a number of benefits to adopting the requests package:

  • The client library code is smaller, leaner and simpler.
  • Requests has built-in decompression support, so we no longer need to implement it ourselves.
  • Requests has built-in raw download and upload support, which helps with our storage drivers (both features are illustrated below).
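For reference, here is what those two built-in features look like in plain requests (the URLs are placeholders):

import requests

# gzip/deflate-encoded response bodies are decompressed transparently
r = requests.get('')

# Raw streaming download, useful for storage drivers moving large objects
r = requests.get('', stream=True)
for chunk in r.iter_content(chunk_size=8192):
    pass  # write each chunk to disk here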

Implications of the change

  • The two classes (LibcloudHTTPSConnection and LibcloudHTTPConnection) that were provided to each driver have been replaced by a single class, LibcloudConnection. You probably won’t notice this because it is a property of the Connection class, but if you are developing or extending functionality then it affects you.
  • Unit tests will look slightly different (see below)
  • This change broke 4200 unit tests (out of 6340)! I’ve since fixed them all, since they were coupled to the original implementation, but now I don’t know whether all of the tests are still valid.

Testing with requests

Unit tests that were written like this:

class DigitalOceanTests(LibcloudTestCase):
    def setUp(self):
        DigitalOceanBaseDriver.connectionCls.conn_classes = \
            (None, DigitalOceanMockHttp)
        DigitalOceanMockHttp.type = None
        self.driver = DigitalOceanBaseDriver(*DIGITALOCEAN_v1_PARAMS)

have been modified because of the change to the following (I updated all of them, so this is just for future reference):

class DigitalOceanTests(LibcloudTestCase):
    def setUp(self):
        DigitalOceanBaseDriver.connectionCls.conn_class = DigitalOceanMockHttp
        DigitalOceanMockHttp.type = None
        self.driver = DigitalOceanBaseDriver(*DIGITALOCEAN_v1_PARAMS)

Check it out!

The package is on my personal Apache site; you can download it and install it in a virtualenv for testing.

pip install -e

The hashes are on my Apache space.

Have a look at the PR and the change set for a list of changes.

What might break?

What I’m really looking for is for users of Libcloud to take 15 minutes and an existing (working) libcloud script, install this package in a virtualenv, and validate that there are no regressions with this change.

I’m particularly sceptical about the storage drivers.

Once we have enough community feedback, we will propose a vote to merge this into trunk for a future release.


Credit to dz0ny on IRC for contributing some of the requests patch.

New compute drivers and deprecated drivers in 1.0

With Libcloud 1.0.0 around the corner, it’s time for a spring clean of the compute drivers. Granted, it’s not spring everywhere; I’m writing from Sydney, Australia, where it’s definitely summer.

Looking at the 52 providers in the 0.21.0 release, I have identified 5 providers that are no longer available or open.

Handling deprecated drivers

For 1.0.0, we need a clean and user-friendly way of handling deprecated drivers as well as keeping the repository clean from legacy code.

The most obvious implementation is that calls to get_driver(Provider.NINEFOLD), for example, will return a user-friendly error saying this provider is no longer supported, with a link to a news article and an alternative solution.

Currently, users trying to instantiate an HPE public cloud driver, for example, will get a connection error, which is not user friendly.
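To make that concrete, here is a hedged sketch of what the proposed behaviour could look like from a user’s perspective; the exact exception type and message are illustrative, not a final implementation.

from libcloud.compute.providers import get_driver
from libcloud.compute.types import Provider

try:
    cls = get_driver(Provider.NINEFOLD)
except Exception as err:
    # e.g. "Provider NINEFOLD is no longer supported; see the
    # announcement for suggested alternatives"
    print(err)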

New compute drivers in 1.0.0-pre2

The upcoming release, currently available in trunk, contains some new compute drivers.

The full change log can be found here.

Using the container abstraction API in 1.0.0-pre1


Containers are the talk of the town; you can’t escape an event or meetup without someone talking about them. The lessons we learnt with compute abstraction apply just as widely to containers in 2016. APIs are not consistent between clouds, designs are not standardised, and yet users are trying to consume multiple services.

We introduced Container-as-a-Service support in 1.0.0-pre1, a community pre-release intended to spark feedback from the open-source community about the design and the implementation of 4 example drivers:

  • Docker
  • Joyent Triton
  • Amazon EC2 Container Service
  • Google Kubernetes

In this tutorial we’re going to explore how to deploy containers across platforms: pulling images from the Docker Hub, deploying them to Docker, Kubernetes and Amazon ECS, then auditing them with a single query.

Getting Started with 1.0.0-pre1

First off, let’s install the new packages. You probably want to do this within a virtualenv if you’re using Apache Libcloud for other projects.

So run these commands at a Linux shell to create a virtualenv called ‘containers’ and install the pre-release packages into that environment.

   virtualenv containers
   cd containers
   source bin/activate
   pip install apache-libcloud==1.0.0-pre1

Now you can start using this package with a test script, so let’s create one.


Using your favourite text editor, update that file to import the 1.0.0-pre1 libraries and the factory methods for instantiating containers.

   from libcloud.container.providers import get_driver
   from libcloud.container.types import Provider

get_driver is a factory method, as with all libcloud APIs; you call it with the Provider that you want to instantiate. Our options are:

  • Provider.DOCKER - Standalone Docker API
  • Provider.KUBERNETES - Kubernetes Cluster endpoint
  • Provider.JOYENT - Joyent Triton Public API
  • Provider.ECS - Amazon EC2 Container Service

Calling get_driver will return a reference to the driver class that you requested. You can then instantiate that class into an object using the constructor, which always takes a set of parameters for setting the host or region, the authentication, and any other options.

   driver = get_driver(Provider.DOCKER)

Now we can call our driver and get an instance of it called docker_driver, then use that to deploy a container. For Docker you need the pem files on the server, the host (IP or FQDN) and the port.

   docker_driver = driver(host='', port=4243,  # host elided; use your Docker host's IP or FQDN
                          key_file='key.pem', cert_file='cert.pem')

Docker requires that images are available in its local image database before they can be deployed as containers. With Kubernetes and Amazon ECS this step is not required, as deploying a container triggers that download for you.

   image = driver.install_image('tomcat:8.0')

Now that Docker has the version 8.0 image of Apache Tomcat, you can deploy it as a container called my_tomcat_container. Tomcat runs on TCP/8080 by default, so we want to bind that port for our container using the optional parameter port_bindings:

   bindings = { "22/tcp": [{ "HostPort": "11022" }] }
   container = driver.deploy_container('my_tomcat_container', image, port_bindings=bindings)

This will have deployed the container and started it up for you; you can disable the automatic start-up by passing start=False as a keyword argument. You can now call methods on this container: restart, start, stop and destroy.

For example, to blow away that test container:
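   container.destroy()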


Crossing the streams; calling Kubernetes and Amazon EC2 Container Service

With Docker we saw that we needed to “pull” the image before we deployed it. Kubernetes and Amazon ECS don’t have that requirement, but as a safeguard you can query the Docker Hub API using a provided utility class:

   from libcloud.container.utils.docker import HubClient
   hub = HubClient()
   image = hub.get_image('tomcat', '8.0')

Now image can be used to deploy to any driver instance that you create. Let’s try that against Kubernetes and ECS.

Amazon ECS

Before you run this example, you will need an API key, and that key will need the AmazonEC2ContainerServiceFullAccess role. ap-southeast-2 is my nearest region, but you can swap this out for any of the Amazon public regions where the ECS service is available.

   e_cls = get_driver(Provider.ECS)
   ecs = e_cls(access_id='SDHFISJDIFJSIDFJ',   # your AWS access key ID
               secret='MY_AWS_SECRET_KEY',     # placeholder secret key
               region='ap-southeast-2')

ECS and Kubernetes both support some form of grouping or clustering for your containers. This is available as create_cluster and list_clusters.

   cluster = ecs.create_cluster('default')
   container = ecs.deploy_container(
            name='my_tomcat_container',
            image=image,   # the tomcat:8.0 image fetched via HubClient above
            cluster=cluster,
            ex_container_port=8080, ex_host_port=8080)

This will have deployed a task definition in Amazon ECS with a single container inside, created a cluster called ‘default’, and deployed the tomcat:8.0 image from the Docker Hub to that region.

Check out the ECS Documentation for more details.


Kubernetes

Kubernetes authentication is currently only implemented for None (off) and basic HTTP authentication. Let’s use the basic HTTP authentication method to connect.

k_cls = get_driver(Provider.KUBERNETES)
kubernetes = k_cls(key='my_username',
                   secret='my_password',  # basic HTTP auth credentials (placeholders)
                   host='')    # your Kubernetes API endpoint (placeholder)
cluster2 = kubernetes.create_cluster('default')
container2 = kubernetes.deploy_container(
         name='my_tomcat_container',
         image=image,                     # the tomcat:8.0 image from HubClient
         cluster=cluster2)

Wrapping it up

Now, let’s wrap it all up with a list comprehension across the 3 drivers to get a list of all containers, print their IDs and names, then delete them.

containers = [container
              for conn in [docker_driver, ecs, kubernetes]
              for container in conn.list_containers()]
for container in containers:
    print("%s : %s" % (,
    container.destroy()

About the Author

Anthony Shaw is on the PMC for Apache Libcloud, you can follow Anthony on Twitter at @anthonypjshaw.

Libcloud 1.0.0-pre1 released

We are pleased to announce the release of Libcloud 1.0.0-pre1.

This is the first pre-release in the 1.0.0 series, which means it brings many new features, improvements, bug-fixes, and new DNS drivers.

Release highlights

A full blog post on the new features in 1.0.0 can be found here.

The full change log can be found here.


The release can be downloaded or installed using pip:

pip install apache-libcloud==1.0.0-pre1


If you have installed Libcloud using pip, you can also use pip to upgrade it:

pip install --upgrade apache-libcloud==1.0.0-pre1

Upgrade notes

A page which describes backward-incompatible or semi-incompatible changes, and how to preserve the old behaviour where possible, can be found in the documentation.


Documentation

Regular and API documentation is available online.

Bugs / Issues

If you find any bug or issue, please report it on our issue tracker. Don’t forget to attach an example and/or a test which reproduces your problem.


Thanks to everyone who contributed and made this release possible! Full list of people who contributed to this release can be found in the CHANGES file.

Libcloud 1.0-pre1 open for feedback

We are pleased to announce that the version 1.0-pre1 vote thread is open and the release is ready for community feedback.

1.0-pre1 marks the first pre-release of the 1.0 major release. Some years ago, Tomaz Muraus spoke on the FLOSS Weekly podcast about what a huge challenge porting the project to Python 3.x would be, as well as about the 1.0 milestone.

It is worth listening to the podcast to see how far things have come; we now average 2 pull requests a day and have 156 contributors.

As the project has matured over the last 5 years, one of the most remarkable changes has been the adoption by the community and the continued support from our contributors, who add new drivers, patch strange API issues and keep the project alive.

Anthony Shaw will be speaking on the FLOSS Weekly podcast on February 2nd, discussing our community and the project, so please tune in.

The cloud market, as I’m sure you’re all aware, is thriving. The purpose of Libcloud was originally:

  • To help prevent lock-in to a particular vendor
  • To abstract the complexity of vendor APIs
  • To give a simple way for deploying to and managing multiple cloud vendors

Since then we have had (at the last count) 2,118,539 downloads. The project continues to grow in popularity with each new release.

So with the 1.0 major release we would like to announce two new driver types: containers and backup.

History of our drivers

The compute (IaaS) API is what Libcloud is best known for, but there is a range of drivers available for many other capabilities.

There is a presentation on the value of using Libcloud to avoid lock-in on SlideShare.

This is a history of the different driver types in the libcloud project.

  • Compute (v0.1.0)
      • Support for nodes, node images, locations, states
      • 52 providers including every major cloud provider in the market, plus local services like VMware, OpenStack and libvirt
  • DNS (v0.6.0)
      • Support for zones, records, record types
      • 19 providers including CloudFlare, DigitalOcean, DNSimple, GoDaddy, Google DNS, Linode, Rackspace, Amazon R53, Zerigo
  • Object Storage (v0.5.0)
      • Support for containers and objects
      • 11 providers including Amazon S3, Azure Blobs, Google Storage, CloudFiles, OpenStack Swift
  • Load Balancer (v0.5.0)
      • Support for nodes, balancers, listeners and algorithms
      • 11 providers including CloudStack, Dimension Data, Amazon ELB, Google GCE LB, SoftLayer LB
  • Backup (v0.20.0)
      • Support for backup targets, recovery points and jobs
      • 3 providers: Dimension Data, Amazon EBS snapshots, Google snapshots

Introducing Backup Drivers

With 1.0-pre1 we have introduced a new driver type for backup, libcloud.backup.

The Backup API allows you to manage Backup-as-a-Service offerings such as EBS snapshots, GCE volume snapshots and Dimension Data backup. The base API defines the following concepts:


  • libcloud.backup.base.BackupTarget - Represents a backup target, like a virtual machine, a folder or a database.
  • libcloud.backup.base.BackupTargetRecoveryPoint - Represents a copy of the data in the target; a recovery point can be recovered to a backup target. An in-place restore is where you recover to the same target, and an out-of-place restore is where you recover to another target.
  • libcloud.backup.base.BackupTargetJob - Represents a backup job running on a backup target.
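To give a feel for the API, here is a minimal sketch following the familiar get_driver pattern. It assumes the base driver exposes list_targets and list_recovery_points as in libcloud.backup.base; constructor arguments vary by provider, so treat this as illustrative rather than final.

from libcloud.backup.providers import get_driver
from libcloud.backup.types import Provider

cls = get_driver(Provider.DIMENSIONDATA)  # one of the 3 initial providers
driver = cls('user', 'api key')           # constructor arguments vary by provider

# Walk each backup target and count its recovery points
for target in driver.list_targets():
    points = driver.list_recovery_points(target)
    print(, len(points))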

Introducing Container-as-a-Service Drivers

The API is for Container-as-a-Service providers, the new types of cloud services that offer container management and hosting as a service. These services already expose proprietary APIs, creating the need for a tool like Libcloud if you want to provision to any cloud provider.

Google, Amazon and Joyent have all announced container cloud services, and Microsoft has launched a beta service too, so we are getting on the front foot with an abstraction API for people wishing to gain benefits similar to those of the compute, load balancer and storage APIs.

A presentation on this topic is available on SlideShare.

Isn’t Docker a standard? Well, yes and no.

Docker has been the main technology adopted by these providers, both as the host system for the containers and as the specification of the containers themselves. But Docker is not a provisioning system; it is a virtualization host. Also, there are alternatives, like CoreOS rkt.

Container API design

Container-as-a-Service providers will implement the ContainerDriver class to provide functionality for:

  • Listing deployed containers
  • Starting, stopping and restarting containers (where supported)
  • Destroying containers
  • Creating/deploying containers
  • Listing container images
  • Installing container images (pulling an image from a local copy or remote repository)

Simple Container Support

  • libcloud.container.base.ContainerImage - Represents an image that can be deployed, like an application or an operating system
  • libcloud.container.base.Container - Represents a deployed container image running on a container host

Cluster Support

Cluster support extends the basic driver functions: where drivers set the class-level attribute supports_clusters to True, clusters may be listed, created and destroyed. When containers are deployed, the target cluster can be specified, as sketched after this list.

  • libcloud.container.base.ContainerCluster - Represents a cluster, a group that containers can be deployed into
  • libcloud.container.base.ClusterLocation - Represents a location for clusters to be deployed
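A minimal sketch, assuming driver and image from the earlier examples and a provider whose supports_clusters is True; the cluster name is a placeholder.

if driver.supports_clusters:
    # Create a cluster, deploy a container into it, then enumerate clusters
    cluster = driver.create_cluster('production')
    container = driver.deploy_container('tomcat', image, cluster=cluster)
    for c in driver.list_clusters():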

Using the container drivers

The container drivers have been designed around principles similar to the compute driver: they are simple to use, with a flat class design.

from libcloud.container.providers import get_driver
from libcloud.container.types import Provider

Cls = get_driver(Provider.DOCKER)
driver = Cls('user', 'api key')

image = driver.install_image('tomcat:8.0')
container = driver.deploy_container('tomcat', image)


Container Registries

The Docker Registry API is used by services like Amazon ECR, the Docker Hub website, and anyone hosting their own Docker registry. It doesn’t belong to a particular driver, so it is a utility class. Some providers, like Amazon ECR, have a factory method to provide a registry client. Images from a Docker registry can be passed to the deploy_container method of any driver, as shown below.

from libcloud.container.utils.docker import HubClient 
hub = HubClient() 
image = hub.get_image('ubuntu', 'latest') 
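For example, deploying that image with the Docker driver instance created earlier (the container name is a placeholder):

container = driver.deploy_container('ubuntu-test', image)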

When other container registry services become available, they can be supported in a similar way.

Prototype drivers in libcloud.container

Drivers have been provided to show example implementations of the API; these drivers are experimental and need to go through more thorough community testing before they are ready for a stable release.

The driver with the most contentious implementation is Kubernetes. We would like users of Amazon ECS, Google Containers and the Kubernetes project to provide feedback on how they would like clusters, pods and namespaces mapped to the low-level concepts in the driver.

Providing feedback

The voting thread is open; please use this as your opportunity to give feedback.


Thanks to everyone who contributed and made this release possible! Full list of people who contributed to this release can be found in the CHANGES file.