The promise of Machine Learning (ML) to solve data-driven problems at scale has created growing interest in incorporating ML components into software systems. However, deploying ML models opens the door to additional security vulnerabilities, such as poisoning, privacy, and adversarial attacks. A successful attack can have severe consequences, especially in safety-critical applications. In traditional software development, there exists a plethora of security guidelines and principles. Their demonstrated effectiveness leads us to ask: How can we leverage these principles to develop secure and robust ML systems? The challenge of this question is that, unlike traditional software, ML is deployed in variable settings; thus, the security of ML systems must be adaptable to environmental changes. This talk gives practitioners an overview of the ML security landscape and introduces best practices to secure an ML system against potential attacks.