Tag: announcement

New compute drivers and deprecated drivers in 1.0

With Libcloud 1.0.0 around the corner, it’s time to have a spring clean of the compute drivers. Granted, it’s not spring everywhere - I’m actually writing from Sydney, Australia, where it’s definitely summer.

Looking at the 52 providers in the 0.21.0 release, I have identified 5 providers that are no longer available or open.

Handling deprecated drivers

For 1.0.0, we need a clean and user-friendly way of handling deprecated drivers as well as keeping the repository clean from legacy code.

The most obvious implementation is for calls such as get_driver(Provider.NINEFOLD) to raise a user-friendly error saying that the provider is no longer supported, with a link to an article and an alternative solution.

Currently, users trying to instantiate the HPE public cloud driver, for example, get a connection error, which is not user friendly.
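As a minimal sketch of the idea (this is illustrative only, not the actual libcloud implementation; DEPRECATED_PROVIDERS, ProviderDeprecatedError and the message text are hypothetical names):

```python
# Hypothetical sketch of deprecated-driver handling in get_driver().
# None of these names are the real libcloud internals.

DEPRECATED_PROVIDERS = {
    'ninefold': 'This provider is no longer supported.',
}


class ProviderDeprecatedError(Exception):
    """Raised when a caller requests a driver that has been retired."""
    pass


def get_driver(provider):
    # Fail early with a helpful message instead of a connection error
    if provider in DEPRECATED_PROVIDERS:
        raise ProviderDeprecatedError(
            '%s: %s See the project website for alternatives.'
            % (provider, DEPRECATED_PROVIDERS[provider]))
    # ... normal driver lookup would happen here ...
    return object()
```

The key point is that the user sees an explanatory error at lookup time, rather than a cryptic network failure when the first API call is made.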

New compute drivers in 1.0.0-pre2

The upcoming release, currently available in trunk, contains some new compute drivers.

The full change log can be found here.

Libcloud 1.0-pre1 open for feedback

We are pleased to announce that the 1.0-pre1 vote thread is open and the release is ready for community feedback.

1.0-pre1 marks the first pre-release of the 1.0 major release. Some years ago, Tomaz Muraus spoke on the FLOSS Weekly podcast about what a huge challenge porting the project to Python 3.x would be(!), as well as about the 1.0 milestone.

It is worth listening to the podcast to see how far things have come: we now average 2 pull requests a day and have 156 contributors.

As the project has matured over the last 5 years, one of the most remarkable changes has been the adoption by the community and the continued support from our contributors, who add new drivers, patch strange API issues and keep the project alive.

Anthony Shaw will be speaking on the FLOSS Weekly podcast on February 2nd to discuss our community and the project, so please tune in.

The cloud market, as I’m sure you’re all aware, is thriving. The original purpose of Libcloud was:

  • To help prevent lock-in to a particular vendor
  • To abstract the complexity of vendor APIs
  • To give a simple way of deploying to and managing multiple cloud vendors

Since then we have had (at the last count) 2,118,539 downloads. The project continues to grow in popularity with each new release.

So with the 1.0 major release we would like to announce two new driver types: container and backup.

History of our drivers

The compute (IaaS) API is what Libcloud is best known for, but there is a range of drivers available for many other capabilities.

There is a presentation on SlideShare on the value of using Libcloud to avoid lock-in.

This is a history of the different driver types in the libcloud project.

  • Compute (v0.1.0)
      • Support for nodes, node images, locations and states
      • 52 providers, including every major cloud provider in the market, plus local platforms such as VMware, OpenStack and libvirt
  • DNS (v0.6.0)
      • Support for zones, records and record types
      • 19 providers, including CloudFlare, DigitalOcean, DNSimple, GoDaddy, Google DNS, Linode, Rackspace, Amazon Route 53 and Zerigo
  • Object Storage (v0.5.0)
      • Support for containers and objects
      • 11 providers, including Amazon S3, Azure Blobs, Google Storage, CloudFiles and OpenStack Swift
  • Load Balancer (v0.5.0)
      • Support for nodes, balancers, listeners and algorithms
      • 11 providers, including CloudStack, Dimension Data, Amazon ELB, Google GCE LB and SoftLayer LB
  • Backup (v0.20.0)
      • Support for backup targets, recovery points and jobs
      • 3 providers: Dimension Data, Amazon EBS snapshots and Google snapshots

Introducing Backup Drivers

With 1.0-pre1 we have introduced a new driver type for backup, libcloud.backup.

The Backup API allows you to manage Backup-as-a-Service offerings such as Amazon EBS snapshots, GCE volume snapshots and Dimension Data backup.


  • libcloud.backup.base.BackupTarget - Represents a backup target, like a Virtual Machine, a folder or a database.
  • libcloud.backup.base.BackupTargetRecoveryPoint - Represents a copy of the data in the target; a recovery point can be recovered to a backup target. An in-place restore is where you recover to the same target, and an out-of-place restore is where you recover to another target.
  • libcloud.backup.base.BackupTargetJob - Represents a backup job running on a backup target.
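To make the relationship between these classes concrete, here is a minimal, self-contained sketch showing an in-place versus an out-of-place restore. These are plain-Python stand-ins, not the real libcloud.backup.base classes, which carry more fields and driver plumbing:

```python
# Illustrative stand-ins for the backup concepts described above.

class BackupTarget:
    """A thing that can be backed up: a VM, a folder or a database."""
    def __init__(self, name):
        self.name = name


class BackupTargetRecoveryPoint:
    """A point-in-time copy of a target's data."""
    def __init__(self, target, point_id):
        self.target = target   # the target this data was copied from
        self.id = point_id

    def recover(self, target=None):
        # Recovering to the original target is an in-place restore;
        # recovering to a different target is an out-of-place restore.
        destination = target or self.target
        return 'in-place' if destination is self.target else 'out-of-place'


vm_a = BackupTarget('vm-a')
vm_b = BackupTarget('vm-b')
point = BackupTargetRecoveryPoint(vm_a, 'rp-1')

print(point.recover())        # restore back onto vm-a
print(point.recover(vm_b))    # restore onto a different target, vm-b
```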

Introducing Container-as-a-Service Drivers

The API is for Container-as-a-Service providers: new cloud services that offer container management and hosting as a service. These services each expose their own proprietary API, which creates the need for a tool like Libcloud if you want to provision to any cloud provider.

Google, Amazon and Joyent have all announced container cloud services, and Microsoft has launched a beta service as well, so we are getting on the front foot with an abstraction API for people wishing to gain benefits similar to those of the compute, load balancer and storage APIs.

A presentation on this topic is available on SlideShare.

Isn’t Docker a standard? Well, yes and no.

Docker has been the main technology adopted by these providers, both as the host system for the containers and as the specification for the containers themselves. But Docker is not a provisioning system; it is a virtualization host. There are also alternatives, such as CoreOS rkt.

Container API design

Container-as-a-Service providers implement the ContainerDriver class to provide functionality for:

  • Listing deployed containers
  • Starting, stopping and restarting containers (where supported)
  • Destroying containers
  • Creating/deploying containers
  • Listing container images
  • Installing container images (pulling an image from a local copy or remote repository)
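As a rough sketch of that surface area, the driver interface could look like the following. Method names here follow the examples later in this post (install_image, deploy_container); the others are illustrative and may not match the real libcloud.container.base.ContainerDriver signatures exactly:

```python
from abc import ABC, abstractmethod

# Sketch of the container driver surface described above.
# Names are illustrative, not a copy of the real libcloud base class.

class ContainerDriver(ABC):
    @abstractmethod
    def list_containers(self):
        """List deployed containers."""

    @abstractmethod
    def deploy_container(self, name, image):
        """Create/deploy a container from an image."""

    @abstractmethod
    def start_container(self, container):
        """Start a stopped container (where supported)."""

    @abstractmethod
    def stop_container(self, container):
        """Stop a running container (where supported)."""

    @abstractmethod
    def restart_container(self, container):
        """Restart a container (where supported)."""

    @abstractmethod
    def destroy_container(self, container):
        """Destroy a container."""

    @abstractmethod
    def list_images(self):
        """List container images available on the host."""

    @abstractmethod
    def install_image(self, path):
        """Pull an image from a local copy or remote repository."""
```

Each provider then subclasses this and maps the methods onto its own API, which is what lets calling code stay provider-agnostic.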

Simple Container Support

  • libcloud.container.base.ContainerImage - Represents an image that can be deployed, like an application or an operating system
  • libcloud.container.base.Container - Represents a deployed container image running on a container host

Cluster Support

Cluster support extends the basic driver functions: where a driver sets the class-level attribute supports_clusters to True, clusters may be listed, created and destroyed, and the target cluster can be specified when containers are deployed.

  • libcloud.container.base.ContainerCluster - Represents a cluster that containers can be deployed into
  • libcloud.container.base.ClusterLocation - Represents a location for clusters to be deployed
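Provider-agnostic code can gate on that class-level flag before making any cluster calls. A minimal sketch (the driver classes and method names below are illustrative, not real libcloud drivers):

```python
# Sketch of how a supports_clusters flag gates cluster functionality.

class BaseDriver:
    supports_clusters = False  # drivers opt in by overriding this

    def list_clusters(self):
        raise NotImplementedError('Driver does not support clusters')


class ExampleClusterDriver(BaseDriver):
    supports_clusters = True

    def list_clusters(self):
        return ['default-cluster']


def safe_list_clusters(driver):
    # Check the class-level flag before issuing cluster calls, so the
    # same code path works for drivers with and without cluster support.
    if not driver.supports_clusters:
        return []
    return driver.list_clusters()


print(safe_list_clusters(BaseDriver()))            # []
print(safe_list_clusters(ExampleClusterDriver()))  # ['default-cluster']
```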

Using the container drivers

The container drivers have been designed around principles similar to the compute driver: simple to use, with a flat class design.

from libcloud.container.providers import get_driver
from libcloud.container.types import Provider

Cls = get_driver(Provider.DOCKER)
driver = Cls('user', 'api key')

image = driver.install_image('tomcat:8.0')
container = driver.deploy_container('tomcat', image)


Container Registries

The Docker Registry API is used by services like Amazon ECR, the Docker Hub website and anyone hosting their own Docker registry. It doesn’t belong to a particular driver, so it is a utility class. Some providers, like Amazon ECR, have a factory method to provide a registry client. Images from a Docker registry can be passed to the deploy_container method of any driver.

from libcloud.container.utils.docker import HubClient 
hub = HubClient() 
image = hub.get_image('ubuntu', 'latest') 

When other container registry services become available, they can be supported in a similar fashion.

Prototype drivers in libcloud.container

Drivers have been provided to show example implementations of the API. These drivers are experimental and need to go through more thorough community testing before they are ready for a stable release.

The driver with the most contentious implementation is Kubernetes. We would like users of Amazon ECS, Google Containers and the Kubernetes project to provide feedback on how they would like clusters, pods and namespaces to map to the low-level concepts in the driver.

Providing feedback

The voting thread is open, please use this as your opportunity to give feedback.


Thanks to everyone who contributed and made this release possible! The full list of people who contributed to this release can be found in the CHANGES file.

Notice for Linode users

This is an announcement for users of the Linode driver for Libcloud who might have started experiencing issues recently.


A couple of Libcloud users have reported that they have recently started experiencing issues when talking to the Linode API using Libcloud. They have received messages similar to the one shown below.

socket.error: [Errno 104] Connection reset by peer

It turns out that the issue is related to the SSL / TLS version used. For compatibility and security reasons (Libcloud also supports older Python versions), Libcloud uses TLS v1.0 by default.

Linode recently dropped support for TLS v1.0 and now only supports TLS >= v1.1. This means Libcloud won’t work out of the box anymore.


If you are experiencing this issue, you should update your code to use TLS v1.2 or TLS v1.1 as shown below.

import ssl

import libcloud.security
libcloud.security.SSL_VERSION = ssl.PROTOCOL_TLSv1_1
# or even better if your system and Python version supports TLS v1.2
libcloud.security.SSL_VERSION = ssl.PROTOCOL_TLSv1_2

# Instantiate and work with the Linode driver here...

Keep in mind that for this to work you need to have a recent version of OpenSSL installed on your system and you need to use Python >= 2.7.9 or Python >= 3.4.

For more details, please see the recently updated documentation. If you are still experiencing issues or have any questions, please feel free to reach us via the mailing list or IRC.

Note: Even if you are not experiencing any issues, it’s generally a good idea to use the highest version of TLS supported by your system and the provider you use.

Quick note on ssl.PROTOCOL_SSLv23

Python uses the ssl.PROTOCOL_SSLv23 constant by default. When this constant is used, the client and server negotiate the highest protocol version that both of them support (selecting between SSL v3.0, TLS v1.0, TLS v1.1 and TLS v1.2).

We use ssl.PROTOCOL_TLSv1 instead of ssl.PROTOCOL_SSLv23 for security and compatibility reasons. SSL v3.0 is considered broken and unsafe, and using ssl.PROTOCOL_SSLv23 can result in an increased risk of a downgrade attack.
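For reference, if you do want auto-negotiation in your own code, a common pattern with the standard ssl module (independent of Libcloud) is to keep ssl.PROTOCOL_SSLv23 but explicitly disable the broken SSL 3.0 protocol via context options. A sketch:

```python
import ssl

# Let the client and server negotiate the highest mutually supported
# protocol version...
ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)

# ...but explicitly forbid SSL 3.0, so a downgrade attack cannot push
# the connection below TLS v1.0.
ctx.options |= ssl.OP_NO_SSLv3
```

This keeps forward compatibility with TLS v1.1 / v1.2 servers while closing off the unsafe SSL v3.0 fallback.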


Special thanks to Jacob Riley, Steve V, Heath Naylor and everyone from LIBCLOUD-791 who helped debug and track down the root cause of this issue.