In addition to being used for visualization, the highly parallel architecture of GPUs makes them a natural fit for accelerating data-parallel, throughput-oriented computations such as machine learning or numerical simulations. When GPU applications are deployed inside data centers, they suffer from the same packaging issues as CPU applications, compounded by a strong need for reproducible performance. The Docker ecosystem is mostly CPU-centric and aims to be hardware-agnostic. GPU applications break this assumption, since they require specialized hardware and a specific kernel device driver. We will show how we reconciled these seemingly opposed requirements to enable the containerization and execution of GPU applications with Docker.