
AWS Certified Developer - Associate (DVA-C01)

Introduction

The AWS Certified Developer – Associate (DVA-C01) exam is intended for individuals who perform a developer role. The exam validates a candidate’s ability to do the following:

  • Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
  • Demonstrate proficiency in developing, deploying, and debugging cloud-based applications by using AWS

https://aws.amazon.com/certification/certified-developer-associate/

Recommended AWS knowledge

The target candidate should be able to do the following:

  • Use the AWS service APIs, CLI, and software development kits (SDKs) to write applications
  • Identify key features of AWS services
  • Understand the AWS shared responsibility model
  • Use a continuous integration and continuous delivery (CI/CD) pipeline to deploy applications on AWS
  • Use and interact with AWS services
  • Apply basic understanding of cloud-native applications to write code
  • Write code by using AWS security best practices (for example, use IAM roles instead of secret and access keys in the code)
  • Author, maintain, and debug code modules on AWS

Domains
  • 1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
    • Commit code to a repository and invoke build, test and/or deployment actions
    • Use labels and branches for version and release management
    • Use AWS CodePipeline to orchestrate workflows against different environments
    • Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS CodeDeploy for CI/CD purposes
    • Perform a rollback based on the application deployment policy
  • 1.2 Deploy applications using AWS Elastic Beanstalk.
    • Utilize existing supported environments to define a new application stack
    • Package the application
    • Introduce a new application version into the Elastic Beanstalk environment
    • Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with batch, immutable)
    • Validate application health using Elastic Beanstalk dashboard
    • Use Amazon CloudWatch Logs to instrument application logging
  • 1.3 Prepare the application deployment package to be deployed to AWS.
    • Manage the dependencies of the code module (like environment variables, config files and static image files) within the package
    • Outline the package/container directory structure and organize files appropriately
    • Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)
  • 1.4 Deploy serverless applications.
    • Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template
    • Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)
  • 2.1 Make authenticated calls to AWS services.
    • Determine the required policy based on the least privileges required by the application
    • Assume an IAM role to access a service
    • Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)
  • 2.2 Implement encryption using AWS services.
    • Encrypt data at rest (client side; server side; envelope encryption) using AWS services
    • Encrypt data in transit
  • 2.3 Implement application authentication and authorization.
    • Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools
    • Use Amazon Cognito-provided credentials to write code that accesses AWS services
    • Use Amazon Cognito sync to synchronize user profiles and data
    • Use developer-authenticated identities to interact between end user devices, backend authentication, and Amazon Cognito
  • 3.1 Write code for serverless applications.
    • Compare and contrast server-based vs. serverless models (e.g., microservices, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications)
    • Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, time out, runtime, handler)
    • Create an API endpoint using Amazon API Gateway
    • Create and test appropriate API actions like GET, POST using the API endpoint
    • Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes)
    • Compute read/write capacity units for Amazon DynamoDB based on application requirements
    • Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis)
    • Invoke an AWS Lambda function synchronously and asynchronously
  • 3.2 Translate functional requirements into application design.
    • Determine real-time vs. batch processing for a given use case
    • Determine use of synchronous vs. asynchronous for a given use case
    • Determine use of event vs. schedule/poll for a given use case
    • Account for tradeoffs for consistency models in an application design
  • 3.3 Implement application design into application code.
    • Write code to utilize messaging services (e.g., SQS, SNS)
    • Use Amazon ElastiCache to create a database cache
    • Use Amazon DynamoDB to index objects in Amazon S3
    • Write a stateless AWS Lambda function
    • Write a web application with stateless web servers (Externalize state)
  • 3.4 Write code that interacts with AWS services by using APIs, SDKs, and AWS CLI.
    • Choose the appropriate APIs, software development kits (SDKs), and CLI commands for the code components
    • Write resilient code that deals with failures or exceptions (i.e., retries with exponential back off and jitter)
  • 4.1 Optimize applications to best use AWS services and features.
    • Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache)
    • Apply an Amazon S3 naming scheme for optimal read performance
  • 4.2 Migrate existing application code to run on AWS.
    • Isolate dependencies
    • Run the application as one or more stateless processes
    • Develop in order to enable horizontal scalability
    • Externalize state
  • 5.1 Write code that can be monitored.
    • Create custom Amazon CloudWatch metrics
    • Perform logging in a manner available to systems operators
    • Instrument application source code to enable tracing in AWS X-Ray
  • 5.2 Perform root cause analysis on faults found in testing or production.
    • Interpret the outputs from the logging mechanism in AWS to identify errors in logs
    • Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues
    • Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component
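Domain 3.4 calls out "retries with exponential back off and jitter" as the resilience pattern to know. It can be sketched in plain Python with no AWS dependency (the `flaky_call` operation and the delay parameters below are hypothetical, chosen only to illustrate the pattern):

```python
import random
import time

def retry_with_backoff(fn, max_attempts=5, base_delay=0.05, max_delay=2.0):
    """Retry fn, sleeping up to base_delay * 2^attempt (full jitter) between tries."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            # full jitter: sleep a random amount up to the exponential cap
            cap = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, cap))

# hypothetical flaky operation: fails twice, then succeeds
calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = retry_with_backoff(flaky_call)
print(result)  # "ok" after two retried failures
```

The AWS SDKs implement this same idea internally; full jitter (a random sleep between zero and the exponential cap) avoids the "thundering herd" of many clients retrying in lockstep.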

SCOPE

Which key tools, technologies, and concepts might be covered on the exam?

The following is a non-exhaustive list of the tools and technologies that could appear on the exam. This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam. The general tools and technologies in this list appear in no particular order. AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:

  • Analytics
  • Application Integration
  • Containers
  • Cost and Capacity Management
  • Data Movement
  • Developer Tools
  • Instances (virtual machines)
  • Management and Governance
  • Networking and Content Delivery
  • Security
  • Serverless

AWS services and features

Analytics

  • Amazon Elasticsearch Service (Amazon ES)
  • Amazon Kinesis

Application Integration

  • Amazon EventBridge (Amazon CloudWatch Events)
  • Amazon Simple Notification Service (Amazon SNS)
  • Amazon Simple Queue Service (Amazon SQS)
  • AWS Step Functions

Compute

Containers

  • Amazon Elastic Container Registry (Amazon ECR)
  • Amazon Elastic Container Service (Amazon ECS)
  • Amazon Elastic Kubernetes Service (Amazon EKS)

Database

  • Amazon DynamoDB
  • Amazon ElastiCache
  • Amazon RDS

Developer Tools

  • AWS CodeArtifact
  • AWS CodeBuild
  • AWS CodeCommit
  • AWS CodeDeploy
  • Amazon CodeGuru
  • AWS CodePipeline
  • AWS CodeStar
  • AWS Fault Injection Simulator
  • AWS X-Ray

Management and Governance

  • AWS CloudFormation
  • Amazon CloudWatch

Networking and Content Delivery

  • Amazon API Gateway
  • Amazon CloudFront
  • Elastic Load Balancing

Security, Identity, and Compliance

  • Amazon Cognito
  • AWS Identity and Access Management (IAM)
  • AWS Key Management Service (AWS KMS)

Storage

  • Amazon S3

S3 - Simple Storage Service

S3 - Description

  • Amazon S3 is a simple key-based object store
  • How much data can I store in Amazon S3?
  • The total volume of data and number of objects you can store are unlimited.
  • Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 TB.
  • The largest object that can be uploaded in a single PUT is 5 GB.
  • For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
  • S3 buckets are region specific
  • S3 bucket names are globally unique, like DNS names
  • Objects are directly accessible via URL
  • You can store virtually any kind of data in any format
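The size limits above translate into a simple upload-strategy decision: anything over the 5 GB single-PUT limit must use Multipart Upload, and AWS recommends multipart above 100 MB. A minimal sketch (the function name and return labels are illustrative; the thresholds come from the limits listed above):

```python
MAX_OBJECT_SIZE = 5 * 1024**4          # 5 TB: largest S3 object
MAX_SINGLE_PUT = 5 * 1024**3           # 5 GB: largest single PUT
MULTIPART_RECOMMENDED = 100 * 1024**2  # 100 MB: multipart recommended above this

def choose_upload_strategy(size_bytes):
    """Pick an upload strategy from the documented S3 size limits."""
    if size_bytes > MAX_OBJECT_SIZE:
        raise ValueError("object exceeds the 5 TB S3 limit")
    if size_bytes > MAX_SINGLE_PUT:
        return "multipart-required"
    if size_bytes > MULTIPART_RECOMMENDED:
        return "multipart-recommended"
    return "single-put"

print(choose_upload_strategy(50 * 1024**2))   # single-put
print(choose_upload_strategy(200 * 1024**2))  # multipart-recommended
print(choose_upload_strategy(6 * 1024**3))    # multipart-required
```

In practice the SDK transfer utilities (e.g., boto3's managed transfers) apply a threshold like this for you automatically.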

S3 - Summary

  • Buckets vs Objects - globally unique name, tied to a region
  • S3 security - IAM policy, S3 Bucket Policy (public access), S3 Encryption
  • S3 Websites - host a static website on Amazon S3
  • S3 Versioning - multiple versions for files, prevent accidental deletes
  • S3 Access Logs - log requests made within your S3 bucket
  • S3 Replication - same-region or cross-region, must enable versioning
  • S3 Storage Classes - Standard, IA, One Zone-IA, Intelligent-Tiering, Glacier, Deep Archive
  • S3 Lifecycle Rules - transition objects between classes
  • Snowball / Snowmobile - import data onto S3 through a physical device
  • Storage Gateway - hybrid solution to extend on-premises storage to S3
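Because bucket names are globally unique and DNS-compatible, they follow strict rules: 3-63 characters, lowercase letters, digits, hyphens and dots, starting and ending with a letter or digit. A rough validity check (a simplified sketch; the full rules in the S3 documentation also forbid IP-address-style names and certain prefixes):

```python
import re

# length 3-63, lowercase letters / digits / dots / hyphens, alnum at both ends
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def looks_like_valid_bucket_name(name):
    """Simplified check against the DNS-style S3 bucket naming rules."""
    return bool(BUCKET_NAME_RE.match(name)) and ".." not in name

print(looks_like_valid_bucket_name("my-app-logs-2021"))  # True
print(looks_like_valid_bucket_name("My_Bucket"))         # False (uppercase, underscore)
print(looks_like_valid_bucket_name("ab"))                # False (too short)
```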

S3 - Use cases

  • Backup and storage
  • Disaster Recovery
  • Archive
  • Hybrid Cloud storage
  • Application hosting
  • Media hosting
  • Data lakes & big data analytics
  • Software delivery
  • Static website

EC2 - Elastic Compute Cloud

EC2 - Description

  • highly configurable virtual servers (e.g., CPU, memory, network, OS)

EC2 - Pricing Model

  • On-Demand Instances (Least Commitment)
    • low cost and flexible
    • pay per hour or per second, depending on the instance and OS type
    • Use case: short-term, spiky, unpredictable workloads; first-time apps
    • ideal when your workload cannot be interrupted
    • when you launch an EC2 instance, it uses On-Demand pricing by default
    • On-Demand has no upfront payment and no long-term commitment
  • Reserved Instances (RI) up to 75% off (Best Long-term)
    • Use case: steady-state, predictable usage, or workloads that require reserved capacity
    • reduced pricing is based on Term x Class Offering x Payment Option
    • offering classes
      • Standard - up to 75% reduced pricing compared to On-Demand; cannot change RI attributes
      • Convertible - up to 54% reduced pricing compared to On-Demand; allows you to change RI attributes if the new ones are of greater or equal value
      • Scheduled - you reserve instances for specific time periods, e.g., once a week for a few hours
    • terms
      • 1-year or 3-year contract; the longer the term, the greater the savings
    • payment options
      • all upfront, partial upfront, or no upfront
      • the greater the upfront payment, the greater the savings
    • RIs can be shared between multiple accounts within an organization
    • unused RIs can be sold in the Reserved Instance Marketplace
  • Spot Instances up to 90% off (Biggest Savings)
    • AWS sells unused compute capacity to maximize the utility of its idle servers
    • Spot Instances provide a discount of up to 90% compared to On-Demand pricing
    • Spot Instances can be terminated by AWS if the compute capacity is needed by On-Demand customers
    • Use case: non-critical background jobs
    • Use case: workloads that can handle interruptions (servers randomly stopping and starting)
    • designed for applications that have flexible start and end times, or that are only feasible at very low compute costs
    • AWS Batch is an easy and convenient way to use Spot pricing
    • Termination conditions
      • instances can be terminated by AWS at any time
      • if your instance is terminated by AWS, you are not charged for a partial hour of usage
      • if you terminate the instance yourself, you are still charged for any hour it ran
  • Dedicated (Most Expensive)
    • dedicated servers
    • can be On-Demand or Reserved (up to 70% off)
    • Use case: when you need a guarantee of isolated hardware (enterprise requirements)
    • designed to meet regulatory requirements, or for strict server-bound licensing that won't support multi-tenancy or cloud deployments
    • Dedicated Host Instance
      • is Single Tenant: a single customer has dedicated hardware; physical isolation is what separates customers
      • is NOT Multi-Tenant: multiple customers running workloads on the same hardware, where virtual isolation separates customers
    • enterprises and large organizations may have security concerns or obligations against sharing the same hardware with other AWS customers
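The discount tiers above can be compared with simple arithmetic. A sketch using the headline percentages from these notes (the $0.10/hour On-Demand rate is hypothetical; real prices vary by instance type, OS, and region, and "up to" discounts are best cases):

```python
ON_DEMAND_HOURLY = 0.10  # hypothetical On-Demand rate, $/hour

# best-case discounts quoted above (illustrative only)
DISCOUNTS = {
    "on-demand": 0.00,
    "reserved-standard": 0.75,     # up to 75% off
    "reserved-convertible": 0.54,  # up to 54% off
    "spot": 0.90,                  # up to 90% off
}

def monthly_cost(model, hours=730):
    """Approximate monthly cost (730 hours ~ one month) under a pricing model."""
    return round(ON_DEMAND_HOURLY * (1 - DISCOUNTS[model]) * hours, 2)

for model in DISCOUNTS:
    print(f"{model}: ${monthly_cost(model)}/month")
```

Even with made-up rates, this makes the ordering concrete: Spot is cheapest but interruptible, Standard RIs beat Convertible RIs, and On-Demand is the baseline.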

EB - Elastic Beanstalk

EB - Description

  • Platform as a Service (PaaS), limited to certain programming languages or Docker
  • Deploy code consistently with a known architecture: ALB + EC2 + RDS
  • Elastic Beanstalk is a developer centric view of deploying an application on AWS
  • It uses all the components we've seen before: EC2, ASG, ELB, RDS, etc.
  • Beanstalk = Platform as a Service (PaaS)
  • Managed service
  • Instance configuration / OS is handled by Beanstalk
  • Deployment strategy is configurable but performed by Elastic Beanstalk
  • Just the application code is the responsibility of the developer
  • Three architecture models:
  • Single Instance deployment: good for dev
  • LB + ASG: great for production and pre-production web applications
  • ASG only: great for non-web apps in production (workers, etc.)
  • Support for many platforms: Go, Java SE, Java with Tomcat, .NET on Windows Server with IIS, Node.js, PHP, Python, Ruby, Packer Builder, Single Container Docker, Multi-Container Docker, Preconfigured Docker
  • If not supported, you can write your custom platform (advanced)
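The deployment strategy is configured per environment; one way to pin it is an `.ebextensions` config file using the `aws:elasticbeanstalk:command` namespace (a minimal illustrative fragment; the filename is arbitrary):

```yaml
# .ebextensions/deploy.config — sets the deployment policy for the environment
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Immutable   # AllAtOnce | Rolling | RollingWithAdditionalBatch | Immutable
    BatchSizeType: Percentage     # used by rolling policies
    BatchSize: 30
```

These map directly to the deployment policies listed under domain 1.2 (all at once, rolling, rolling with additional batch, immutable).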

LA - Lambda

LA - Description

Serverless functions run code without provisioning or managing servers; you pay only for the compute time you consume.
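A Lambda function is just a handler: an entry point that AWS invokes with an event and a context object. A minimal Python sketch (the event shape here is hypothetical; it runs locally with no AWS account needed):

```python
import json

def lambda_handler(event, context):
    """Entry point Lambda invokes; 'context' carries runtime info, unused here."""
    name = event.get("name", "world")
    # API Gateway proxy integrations expect this statusCode/body shape
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# local invocation with a fake event (context can be None outside AWS)
print(lambda_handler({"name": "dev"}, None))
```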

LA - Summary

  • runtimes: Node.js, Python, Java, C#, Golang, Ruby, etc.
  • virtual function - no servers to manage
  • limited by time - short execution
  • run on-demand
  • scaling is automated
  • easy pricing
  • pay per request and compute time
  • free tier of 1,000,000 AWS Lambda requests and 400,000 GB-seconds of compute time per month
  • pricing
    • pay per calls
    • first 1,000,000 requests are free
    • $0.20 per 1,000,000 requests
    • pay per duration (in increments of 100 ms)
    • 400,000 GB-seconds of compute time per month is free
      • 400,000 seconds if the function uses 1 GB of RAM
      • 3,200,000 seconds if the function uses 128 MB of RAM
      • after that, roughly $1.00 for every 60,000 GB-seconds
  • integrated with the whole AWS suite of services
  • easy monitoring through AWS CloudWatch
  • easy to get more resources per function (up to 3GB of RAM)
  • increasing RAM will also improve CPU and networking
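The pricing bullets above can be turned into a small cost estimator. A sketch using the free tier and per-request price from these notes, plus the published per-GB-second rate of about $0.0000166667 (rates are region-dependent; the workload numbers in the example are hypothetical):

```python
REQUEST_PRICE = 0.20 / 1_000_000  # $0.20 per million requests
GB_SECOND_PRICE = 0.0000166667    # approx. published rate per GB-second
FREE_REQUESTS = 1_000_000         # monthly free-tier requests
FREE_GB_SECONDS = 400_000         # monthly free-tier compute

def monthly_lambda_cost(requests, avg_duration_s, memory_mb):
    """Estimate monthly Lambda cost after the free tier is applied."""
    gb_seconds = requests * avg_duration_s * (memory_mb / 1024)
    request_cost = max(0, requests - FREE_REQUESTS) * REQUEST_PRICE
    compute_cost = max(0, gb_seconds - FREE_GB_SECONDS) * GB_SECOND_PRICE
    return round(request_cost + compute_cost, 2)

# 3M requests/month, 200 ms each, at 512 MB: compute stays inside the free tier,
# so only the 2M requests beyond the free tier are billed
print(monthly_lambda_cost(3_000_000, 0.2, 512))
```

Note how memory enters the compute term: doubling RAM doubles GB-seconds for the same duration, which is why right-sizing memory (and the CPU boost that comes with it) matters for cost.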