AWS Partner Network (APN) Blog

Have You Read Our 2016 AWS Partner Solutions Architect Guest Posts?

by Kate Miller | in Amazon DynamoDB, Amazon ECS, APN Competency Partner, APN Partner Highlight, APN Technical Content Launch, APN Technology Partners, Automation, AWS CloudFormation, AWS Lambda, AWS Marketplace, AWS Partner Solutions Architect (SA) Guest Post, AWS Product Launch, AWS Quick Starts, Big Data, Containers, Database, DevOps on AWS, Digital Media, Docker, Financial Services, Healthcare, NAT, Networking, Red Hat, SaaS on AWS, Security, Storage

In 2016, we hosted 38 guest posts from AWS Partner Solutions Architects (SAs), who work very closely with both Consulting and Technology Partners as they build solutions on AWS. As we kick off 2017, I want to take a look back at all of the fantastic content created by our SAs. A few key themes emerged throughout SA content in 2016, including a focus on building SaaS on AWS, DevOps and how to take advantage of particular AWS DevOps Competency Partner tools on AWS, Healthcare and Life Sciences, Networking, and AWS Quick Starts.

Partner SA Guest Posts

There’ll be plenty more to come from our SAs in 2017, and we want to hear from you. What topics would you like to see our SAs discuss on the APN Blog? What would be most helpful for you as you continue to take advantage of AWS and build your business? Tell us in the comments. We look forward to hearing from you!

 

The Top 10 Most Popular APN Blog Posts of 2016

by Kate Miller | in AWS Partner Solutions Architect (SA) Guest Post, Partner Guest Post

What a year it’s been! The goal of the APN Blog was to bring you information on all of the latest news from the APN throughout the year, while also delivering content on a number of technical topics developed by both AWS and APN Partners. Before we wrap up 2016, we want to take a moment and tell you about the most popular blogs published this year.

Without further ado, here are the top 10 most popular APN Blog posts published in 2016:

Stay tuned to the APN Blog throughout the next year for more news on the APN and content on a wide range of business and technical topics. Have a Happy New Year, and we will see you in 2017!

Financial Services Segment re:Invent Recap

by Kate Miller | in AWS Competencies, AWS Partner Solutions Architect (SA) Guest Post, Financial Services, re:Invent 2016

This is a guest post from Peter Williams. Peter is a Partner Solutions Architect (SA), and he focuses on the Financial Services segment. 

This year’s AWS re:Invent conference keynotes reminded us that we are at a seminal moment in the history of technological innovation. Businesses are transforming their operating model to take advantage of disruptive technologies enabled by AWS. While this is pertinent to every industry, I believe that it is especially true for Financial Services. Having traditionally been one of the more conservative industries with regard to cloud adoption, banks and insurance companies are now deciding to get out of the data center business and take advantage of the agility and cost savings of building on the AWS Cloud.

A Critical Mass for Financial Services

As Financial Services organizations have leveraged AWS’ pace of innovation and new offerings that simplify accessibility to technologies such as big data analytics, high performance computing and deep learning, a critical mass has formed. Leaps forward in time-to-market are becoming the new normal, replacing incremental evolutionary steps of recent years past. Capitalizing on the newfound elasticity and velocity, firms are enabled to respond to regulatory and customer needs at an unprecedented pace.

Banks and insurers are now engaged in full-scale transformations to reduce the cost of infrastructure and other non-core competencies, and they are redirecting their technology investment toward expanding and improving their capabilities to serve customers and advance their market share.

Product and Program Launches

To support this industry transformation, many new capabilities and programs were launched at this year’s AWS re:Invent conference in Las Vegas.  First and foremost was the launch of the AWS Financial Services Competency.  This program helps customers identify and connect with industry-leading Consulting and Technology Partners with solutions for banking and payments, capital markets, and insurance. APN Partners who have achieved the AWS Financial Services Competency have demonstrated industry expertise, readily implemented solutions that align with AWS architectural best practices, and built a deep bench of AWS Trained & Certified individuals.

The launch of the AWS Partner Solutions Finder will also help customers more easily find APN Partners with expertise in the Financial Services industry. Customers can select by industry, use case, and AWS product of interest to identify APN Partners with depth in their area of need.

This re:Invent had no shortage of new product offerings that can help Financial Services organizations optimize their technology investment. Below, I’d like to discuss just a few of the product announcements and how they may impact Financial Services customers and partners. New compute capabilities enable richer functionality, such as the Amazon EC2 F1 Instance, now in preview, with field programmable gate arrays (FPGAs), which customers can program to create custom hardware accelerations for their applications.  New instance types were also announced for the R, T, I and C instance classes, bringing improvements to memory, compute, and I/O throughput.

New AWS service offerings include the fully managed ETL service AWS Glue, which simplifies and automates traditionally difficult and time consuming data discovery, conversion, mapping, and job scheduling tasks. AWS Glue guides you through the process of moving your data with an easy-to-use console that helps you understand your data sources, prepare the data for analytics, and load it reliably from data sources to destinations.

Customers have asked for tools to help them mine transactions, policies, and other types of data stored on Amazon S3. Amazon Athena helps to simplify this process. Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
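As a sketch of what this looks like in practice, the query below is a hypothetical example; it assumes a `transactions` table has already been defined over files in S3 (for instance, via a CREATE EXTERNAL TABLE statement), and the table and column names are illustrative rather than drawn from an actual schema:

```sql
-- Hypothetical table defined over transaction files stored in S3.
-- Table and column names are illustrative only.
SELECT account_id,
       SUM(amount) AS total_spend
FROM transactions
WHERE trade_date BETWEEN DATE '2016-01-01' AND DATE '2016-12-31'
GROUP BY account_id
ORDER BY total_spend DESC
LIMIT 10;
```

Because Athena uses standard SQL, analysts can run exploratory queries like this directly against S3 without loading the data into a database first.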

Large Financial Services firms often need tools to help them run thousands of batch jobs to support applications such as high-performance computing, post-trade analytics, and fraud surveillance. AWS Batch was announced to address these and many other use cases. AWS Batch enables developers to easily and efficiently run hundreds of thousands of computing jobs on AWS, and dynamically provisions the optimal quantity and type of compute resources based on the volume and specific resource requirements of the batch jobs submitted.

Trends for 2017

For Financial Services Partners, one of the key trends discussed at the re:Invent Partner Summit is the move to a software-as-a-service (SaaS) model. Prior to re:Invent, we launched AWS Marketplace SaaS Subscriptions, which you can learn more about here.

SaaS solutions can alleviate the need for customers to manage the software they use.  By eliminating the overhead of managing version upgrades, customers can reduce their total cost of ownership, while taking advantage of new features as soon as they are available. APN Partners can enjoy the competitive advantage of being able to make new features available to all customers without waiting for customer migrations, as well as the lower support cost of maintaining a single version of software. This will be a major driver in 2017 for many Financial Services Partners as they support banks and insurance companies.

Summary

We believe 2017 will be transformational for banking, capital markets, and insurance companies, as they continue to realize the benefits of moving to the AWS Cloud. Consulting Partners specializing in end-to-end cloud transformation can catalyze wide-scale adoption across firms transitioning to a new, more agile approach to technology delivery.  And we believe that Technology Partners will play an increasingly important role as customers use their products in new ways to capitalize on a new pace of innovation. Hear from two of our AWS Financial Services Competency Partners, EIS Group and IHS Markit, as they discuss why their customers are moving to AWS, and how customers take advantage of their software on AWS:

EIS Group:

IHS Markit:

Do you want to learn more about Financial Services on AWS? Visit our AWS Financial Services webpage. For more information about the AWS Financial Services Competency, click here.

 

Why Did Dynatrace Build a SaaS Solution on AWS?

by Kate Miller | in APN Partner Highlight, APN Partner Success Stories, APN Technology Partners, AWS Marketplace, DevOps on AWS, Migration, SaaS on AWS

Dynatrace is an Advanced APN Technology Partner, and holds the AWS Migration and DevOps Competencies. The company recently began offering its cloud application performance management service directly through AWS Marketplace, as a part of the recent AWS Marketplace SaaS Subscriptions launch.

We recently caught up with John Van Siclen, CEO of Dynatrace, and Alois Reitbauer, VP, Chief Technology Strategist of Dynatrace, to learn more about why they chose to build a SaaS solution on AWS, and the value of becoming an APN Partner. Take a look:

To learn more about Dynatrace, click here.

Join Us at the AWS Partner Summit in Canada!

by Kate Miller | in AWS Events, AWS Marketing, AWS Partner Summits, Canada

It’s an exciting time to be an AWS Partner Network (APN) Partner in Canada! We recently announced the launch of the AWS Canada (Central) Region, and on January 24th, we’ll be hosting the very first AWS Partner Summit in Canada.


This event is free, exclusive to APN members, and is your chance to hear from the AWS leadership team and learn how to build successful AWS-based businesses and solutions.

To learn more about the agenda for the AWS Partner Summit in Canada and to register for the event, click here. Seating is limited. Stay tuned to the APN Blog for more information to come about the event in early January!

2016 Technical Recap: Healthcare and Life Sciences

by Kate Miller | in AWS Partner Solutions Architect (SA) Guest Post, Healthcare, Life Sciences, re:Invent 2016

By Aaron Friedman. Aaron Friedman is a Healthcare & Life Sciences Solutions Architect for Amazon Web Services.

What an exciting time to be building healthcare and life science solutions on the AWS Cloud! One of my favorite things about my job is seeing the industry-shaping solutions that our HCLS partners are building. To highlight some of these solutions, we hosted our first pre-day at re:Invent 2016 with both Healthcare and Life Sciences tracks featuring many of our partners, including ClearDATA, Cerner, and 1Strategy. Many other partners, such as OpenEye, gave HCLS talks throughout the week. All of these talks, and more, are available on YouTube.

In the past several weeks alone, there have been myriad launch announcements of new services, and updates to existing services, that will likely become part of your HCLS solutions on AWS. We have already heard excellent feedback about how our HCLS partners intend to use many of these services, such as Amazon Athena, as part of their analytics pipelines. Please note that new services are not necessarily HIPAA-eligible; you can find the full list of HIPAA-eligible services here. However, you can use any AWS service in a HIPAA-compliant application as long as it does not touch PHI. If you have any questions, please do not hesitate to reach out to an AWS Solutions Architect; we are happy to help.

While there are too many announcements to cover each in detail, I wanted to highlight some 2016 launch announcements that I am very excited to see how our partners use in 2017.

DevSecOps. As our CTO, Werner Vogels, is fond of saying, security will always be our number one priority. So we’ll start with that! One of the things we have seen over 2016 is a focus on developing products for our partners interested in security and in a highly integrated factory for delivering code to production. This is especially germane to many of our HCLS partners who are developing software for compliant workloads (HIPAA, regulated workloads in biopharma, etc.).

With regards to compliance, one of the things that HCLS partners focus on is traceability. An auditor of your system might want to know who developed a specific portion of your software, what your environment looked like on a specific date, and the corresponding controls and tests that you have put in place to enforce these standards.

I am particularly excited about the growth of AWS Config, which allows you to monitor your resources and receive configuration change notifications to enable security and governance. Managed Config rules make it easier to get started with monitoring your environment and to move toward automating compliance. In a similar vein, we recently announced support for AWS CloudFormation in our continuous delivery service, AWS CodePipeline. Not only can you use continuous integration/continuous delivery to test your software (built with AWS CodeBuild), but you can now validate changes to your environment as part of your CI/CD process.
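To show how little code a managed rule takes, the CloudFormation snippet below declares one of the AWS-managed Config rules. The logical resource name is illustrative, and the sketch assumes AWS Config is already recording resources in the account:

```json
"EncryptedVolumesRule":{
  "Type":"AWS::Config::ConfigRule",
  "Properties":{
    "ConfigRuleName":"encrypted-volumes",
    "Source":{
      "Owner":"AWS",
      "SourceIdentifier":"ENCRYPTED_VOLUMES"
    },
    "Scope":{
      "ComplianceResourceTypes":["AWS::EC2::Volume"]
    }
  }
}
```

This managed rule flags any EBS volume that is not encrypted, which maps directly to the traceability and auditability requirements many compliant workloads carry.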

Traceability is not only important for compliance, but also for the customer experience. In this vein, we launched AWS X-Ray, which allows you to dive deep into your applications and understand how its underlying services are performing. You are able to quickly and easily detect where issues are occurring and correct the root issues in your software. This allows you to tune every portion of your application, and deliver even better customer experiences while operating in a compliant environment.

Data Analytics and Business Intelligence. 2016 saw rapid delivery of AWS products and services that enable you to more easily derive insight from your data. Often built on top of data lakes on AWS, both population health analytics and real-world evidence platforms are becoming increasingly common. At re:Invent, we announced several services that will allow you to better query that data and derive meaningful insights. Below is one example of how I see these services working in concert in your HCLS solutions.

AWS Batch, in preview, is a set of batch management capabilities that enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch facilitates high-throughput batch analytics. By defining your analytics modules in Docker containers, for instance, you can quickly build out complex high-throughput batch analytics. Two examples where we can see this service being widely used by HCLS partners are (1) genomics secondary analysis and (2) virtual high-throughput screening. Here is an example architecture of how one might use AWS Batch for genomic analysis:

[Architecture diagram: AWS Batch for genomic analysis]
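To make the container-based model concrete, here is a hedged sketch of a Batch job definition as it might be registered via the AWS CLI. The job name, ECR image, resource sizes, and command are hypothetical placeholders for an alignment step, not part of the original post:

```json
{
  "jobDefinitionName":"bwa-alignment",
  "type":"container",
  "containerProperties":{
    "image":"123456789012.dkr.ecr.us-east-1.amazonaws.com/bwa:latest",
    "vcpus":4,
    "memory":8192,
    "command":["bwa","mem","ref.fa","sample.fastq"]
  }
}
```

A file like this could be passed to `aws batch register-job-definition --cli-input-json file://job-def.json`, after which many jobs referencing the definition can be submitted in bulk against a Batch compute environment.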

AWS Glue, coming soon, will be critical for connecting the data generated from these batch analytics with many other data sources for your real-world evidence and healthcare analytics platforms. With AWS Glue, you can integrate data from genomics, anonymized health information, metabolomics, microbiomics, and proteomics into a central repository of knowledge from which to derive insights. Not only will Glue allow you to automate your ETL processes, but it will also help you better understand your data sources and suggest database schemas and transformations, so you can focus more on data discovery rather than data wrangling.

Once you have your data sources organized, you can then use Amazon Athena, a managed interactive query service, to easily analyze petabytes of data in your data lake on S3, which you may have generated with AWS Batch. We envision our customers using Athena to explore their population-scale (e.g. health or genomics) datasets through low-latency queries, which will inform more complex analytic models, such as ones built with our Deep Learning AMI and P2-class GPU instances.

After you have analyzed your data and built the appropriate models with Athena, EMR, and machine learning, you can then serve that data with Amazon QuickSight to your organization so that it can get the appropriate business insights from your data. QuickSight should be very valuable to our partners with GxP workloads, such as with operation analytics and supply chain management.

HIPAA-Eligible Services. We hope that HCLS partners are as excited as we are to see the expansion of HIPAA-eligibility in database (RDS for PostgreSQL and Aurora), as well as storage (Snowball). We are looking forward to seeing how HCLS partners are able to leverage these newly-eligible services for storing, transmitting, and processing PHI. Partners and customers may use any AWS service in an account designated as a HIPAA account, but they should only process, store and transmit PHI in the HIPAA-eligible services defined in the BAA. Learn more here.

It’s Still Day One

It’s truly an exciting time to be a Healthcare and Life Sciences partner at AWS, and with all of these services and more, it always feels like Day One. We are consistently listening to our partners, and we seek to build products that improve both their experience and that of their customers.

If you are interested in learning more about our Healthcare and Life Sciences partners, be sure to check out our Competency Partners, as well as explore our newly launched AWS Partner Solutions Finder.

Please leave any questions or comments below. I’d love to hear from you.

How to Leverage APN Marketing Central – A New Guide on the APN Portal

by Kate Miller | in APN Content Launch, AWS Marketing

Are you looking to do co-marketing with AWS?

APN Marketing Central provides marketing tools and resources that enable you to generate demand for your solutions on AWS. As a benefit for APN Partners at the Standard tier and above, it gives you access to self-service marketing campaigns that you can quickly co-brand and launch, as well as to participating agencies for select marketing services.

This week, we published “How to Leverage APN Marketing Central”, a new guide that walks you through how you can get started taking advantage of APN Marketing Central. Download the PDF on the APN Portal.

Deploy to Production using AWS CodeBuild and the AWS Developer Tools Suite

by Kate Miller | in APN Consulting Partners, AWS CloudFormation, AWS CodeBuild, AWS CodeCommit, AWS Competencies, DevOps on AWS, Guest Post, Partner Guest Post, re:Invent 2016

This is a guest post from Paul Duvall, Stelligent, with contributions from Brian Jakovich and Jonny Sywulak, Stelligent. Paul Duvall is CTO at Stelligent and an AWS Community Hero.

Stelligent is an AWS DevOps Competency Partner. 

At re:Invent 2016, AWS announced a new fully managed service called AWS CodeBuild that allows you to build your software. Using CodeBuild, you can build code with pre-built images for Java, Ruby, Python, Golang, Docker, Node.js, and Android, or use your own customized images for other environments, without provisioning additional compute resources and configuration. This way you can spend more time developing application or service features for your customers.

In our previous post, An Introduction to CodeBuild, we described the purpose of AWS CodeBuild, its target users, and how to set up an initial CodeBuild project. In this post, you will learn how to integrate and automate the orchestration of CodeBuild with the rest of the AWS Developer Tools suite – including AWS CodeDeploy, AWS CodeCommit, and AWS CodePipeline – using AWS’ provisioning tool, AWS CloudFormation. By automating all of the actions and stages into a deployment pipeline, you can release changes to users in production whenever you choose. You’ll see an example that walks you through the process, along with a detailed screencast that shows every step in launching the solution and testing the deployment.

Figure 1 shows this deployment pipeline in action.

Figure 1 – CodePipeline building with CodeBuild and deploying with CodeDeploy using source assets in CodeCommit

Keep in mind that CodeBuild is a building block service you can use for executing build, static analysis, and test actions that you can integrate into your deployment pipelines. You use an orchestration tool like CodePipeline to model the workflow of these actions along with others such as polling a version-control repository, provisioning environments, and deploying software.

Prerequisites

Here are the prerequisites for this solution:

These prerequisites will be explained in greater detail in the Deployment Steps section.

Architecture and Implementation

In Figure 2, you see the architecture for launching a deployment pipeline that gets source assets from CodeCommit, builds with CodeBuild, and deploys software to an EC2 instance using CodeDeploy. You can click on the image to launch the template in CloudFormation Designer.


Figure 2 – Architecture of CodeBuild, CodePipeline, CodeDeploy, and CodeCommit solution

The components of this solution are described in more detail below:

  • AWS CloudFormation – All of the resource generation in this solution is described in CloudFormation, a declarative code language that can be written in JSON or YAML.
  • AWS CodeBuild – Creates a CodeBuild project using the AWS::CodeBuild::Project resource.
  • AWS CodeCommit – Creates a CodeCommit Git repository using the AWS::CodeCommit::Repository resource.
  • AWS CodeDeploy – CodeDeploy automates the deployment to the EC2 instance that was provisioned by the nested stack, using the AWS::CodeDeploy::Application and AWS::CodeDeploy::DeploymentGroup resources.
  • AWS CodePipeline – I’m defining CodePipeline’s stages and actions in CloudFormation code, which includes using CodeCommit as a source action, CodeBuild as a build action, and CodeDeploy as a deploy action. (For more information, see Action Structure Requirements in AWS CodePipeline.)
  • Amazon EC2 – A nested CloudFormation stack is launched to provision multiple EC2 instances on which the CodeDeploy agent is installed. The CloudFormation template called through the nested stack is provided by AWS.
  • AWS IAM – An Identity and Access Management (IAM) role is provisioned using the AWS::IAM::Role resource, which defines the resources that the pipeline can access.
  • AWS SNS – Provisions a Simple Notification Service (SNS) topic using the AWS::SNS::Topic resource. The SNS topic is used by the CodeCommit repository for notifications.

CloudFormation Template

In this section, I’ll highlight a few code snippets from the CloudFormation template that automates the provisioning of the AWS Developer Tools stack along with other resources including EC2, IAM, and SNS. You can find a link to the CloudFormation template at the bottom of this post.

CodeBuild

AWS CloudFormation has provided CodeBuild support from day one. Using the AWS::CodeBuild::Project resource, you can provision your CodeBuild project in code as shown in the sample below.

    "CodeBuildJavaProject":{
      "Type":"AWS::CodeBuild::Project",
      "DependsOn":"CodeBuildRole",
      "Properties":{
        "Name":{
          "Ref":"AWS::StackName"
        },
        "Description":"Build Java application",
        "ServiceRole":{
          "Fn::GetAtt":[
            "CodeBuildRole",
            "Arn"
          ]
        },
        "Artifacts":{
          "Type":"NO_ARTIFACTS"
        },
        "Environment":{
          "Type":"LINUX_CONTAINER",
          "ComputeType":"BUILD_GENERAL1_SMALL",
          "Image":"aws/codebuild/java:openjdk-8"
        },
        "Source":{
          "Location":{
            "Fn::Join":[
              "",
              [
                "https://git-codecommit.",
                {
                  "Ref":"AWS::Region"
                },
                ".amazonaws.com/v1/repos/",
                {
                  "Ref":"AWS::StackName"
                }
              ]
            ]
          },
          "Type":"CODECOMMIT"
        },
        "TimeoutInMinutes":10,
        "Tags":[
          {
            "Key":"Owner",
            "Value":"JavaTomcatProject"
          }
        ]
      }
    },

The key attributes, blocks, and values of the CodeBuild CloudFormation resource are defined here:

  • Name – Define the unique name for the project. In my CloudFormation template, I’m using the stack name as a way of uniquely defining the CodeBuild project without requiring user input.
  • ServiceRole – Refer to the previously-created IAM role resource that provides the proper permissions to CodeBuild.
  • Environment Type – The type attribute defines the type of container that CodeBuild uses to build the code.
  • Environment ComputeType – The compute type defines the CPU cores and memory the build environment uses.
  • Environment Image – The image is the programming platform on which the environment runs.
  • Source Location and Type – In Source, I’m defining the CodeCommit URL as the location along with the type. Along with the CODECOMMIT type, CodeBuild also supports S3 and GITHUB. When CodeCommit is the type, CodeBuild automatically searches for a buildspec.yml file in the root directory of the source repository. See the Build Specification Reference for AWS CodeBuild for more detail.
  • TimeoutInMinutes – This is the amount of time before the CodeBuild project stops running. Here it is reduced from the default of 60 minutes to 10 minutes.
  • Tags – I can define multiple tag types for the CodeBuild project. In this example, I’m defining the team owner.

For more information, see the AWS::CodeBuild::Project resource documentation.
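For reference, a minimal buildspec for a Java project might look like the following. This is a sketch using the current buildspec syntax, and the Maven command and artifact path are assumptions about the project layout rather than part of the original post:

```yaml
version: 0.2
phases:
  build:
    commands:
      # Compile, test, and package the application
      - mvn clean package
artifacts:
  files:
    # Package produced by the Maven build (path is an assumption)
    - target/*.war
```

CodeBuild reads this file from the root of the repository and runs each phase’s commands inside the build container.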

CodeCommit

With CodeCommit, you can provision a fully managed private Git repository that integrates with other AWS services such as CodePipeline and IAM. To automate the provisioning of a new CodeCommit repository, you can use the AWS::CodeCommit::Repository CloudFormation resource. You can create a trigger to receive notifications when the master branch gets updated using an SNS Topic as a dependent resource that is created in the same CloudFormation template. For a more detailed example and description, see Provision a hosted Git repo with AWS CodeCommit using CloudFormation.
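A minimal sketch of that resource is shown below. The logical names are illustrative, and `MySNSTopic` is assumed to be an AWS::SNS::Topic defined elsewhere in the same template:

```json
"CodeCommitRepo":{
  "Type":"AWS::CodeCommit::Repository",
  "Properties":{
    "RepositoryName":{
      "Ref":"AWS::StackName"
    },
    "RepositoryDescription":"Sample application repository",
    "Triggers":[
      {
        "Name":"MasterTrigger",
        "DestinationArn":{
          "Ref":"MySNSTopic"
        },
        "Branches":["master"],
        "Events":["all"]
      }
    ]
  }
}
```

The trigger publishes repository events on the master branch to the SNS topic, which is how the notification behavior described above is wired up.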

CodeDeploy

AWS CodeDeploy provides a managed service to help you automate and orchestrate software deployments to Amazon EC2 instances or those that run on-premises.

To configure CodeDeploy in CloudFormation, you use the AWS::CodeDeploy::Application and AWS::CodeDeploy::DeploymentGroup resources.
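A hedged sketch of the two resources follows. The logical names and tag filter are illustrative, and `CodeDeployRole` is assumed to be an IAM role defined elsewhere in the template:

```json
"CodeDeployApplication":{
  "Type":"AWS::CodeDeploy::Application"
},
"CodeDeployDeploymentGroup":{
  "Type":"AWS::CodeDeploy::DeploymentGroup",
  "DependsOn":"CodeDeployApplication",
  "Properties":{
    "ApplicationName":{
      "Ref":"CodeDeployApplication"
    },
    "DeploymentGroupName":"SampleDeploymentGroup",
    "ServiceRoleArn":{
      "Fn::GetAtt":["CodeDeployRole","Arn"]
    },
    "Ec2TagFilters":[
      {
        "Key":"Name",
        "Value":"CodeDeployTarget",
        "Type":"KEY_AND_VALUE"
      }
    ]
  }
}
```

The tag filter is what ties the deployment group to the EC2 instances provisioned by the nested stack; instances whose Name tag matches receive the deployment.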

CodePipeline

While you can create a deployment pipeline for CodePipeline in CloudFormation by directly writing the configuration code, we often recommend that customers manually create the initial pipeline using the CodePipeline console and then, once it’s established, run the get-pipeline command (as shown below) to retrieve the proper CodePipeline configuration to use in the CloudFormation template. To create a pipeline using the console, follow the steps in the Simple Pipeline Walkthrough. Choose CodeCommit as the source provider, CodeBuild as the build provider, and CodeDeploy as the deploy provider.

In the following snippet, you see how to use the AWS::CodePipeline::Pipeline resource to define the deployment pipeline in CodePipeline.

 "CodePipelineStack":{
      "Type":"AWS::CodePipeline::Pipeline",
      "Properties":{
      ...
        "Stages":[
...

Once the pipeline has been manually created using the AWS console, you can run the following command to get the resource configuration, which you can copy and modify for use in CloudFormation. Replace PIPELINE-NAME with the name of the pipeline that you manually created.

aws codepipeline get-pipeline --name PIPELINE-NAME

This command outputs the pipeline configuration, which you can add to the CodePipeline resource configuration in CloudFormation. You’ll need to modify the attribute names from lowercase to title case.
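For instance, a source action that get-pipeline returns with lowercase keys such as "name" and "actionTypeId" would be rewritten for CloudFormation roughly as follows; the action name is illustrative, and required properties such as OutputArtifacts and Configuration are elided for brevity:

```json
{
  "Name":"Source",
  "ActionTypeId":{
    "Category":"Source",
    "Owner":"AWS",
    "Provider":"CodeCommit",
    "Version":"1"
  }
}
```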

In configuring the CodeBuild action for the CodePipeline resource, the most relevant section is in defining the ProjectName as shown in the snippet below.

  "ProjectName":{
    "Ref":"CodeBuildJavaProject"
  }
},
…

CodeBuildJavaProject references the CodeBuild project resource defined previously in the template.

Costs

Costs can vary as you use certain AWS services and other tools, so below is a cost breakdown and some sample scenarios to give you an idea of what your monthly spend might look like. Note that this will depend on your unique environment and deployment, and the AWS Simple Monthly Calculator can assist in establishing cost projections.

  • CloudFormation – No additional cost.
  • CodeBuild – CodeBuild charges per minute used. It comes with 100 minutes per month at no charge. For a simple execution of this demo, you can stay within the limits of the AWS Free Tier – please read about the Free Tier here. For more information, see AWS CodeBuild pricing.
  • CodeCommit – If used on a small project of less than six users, there’s no additional cost. See AWS CodeCommit Pricing for more information.
  • CodeDeploy – No additional cost.
  • CodePipeline – $1 a month per pipeline unless you’re using it as part of the free tier. For more information, see AWS CodePipeline pricing.
  • EC2 – There are a number of Instance types and pricing options. See Amazon EC2 Pricing for more information.
  • IAM – No additional cost.
  • SNS – Considering you likely won’t have over 1 million Amazon SNS requests for this particular solution, there’s no cost. For more information, see AWS SNS Pricing.
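As a rough sketch of the arithmetic behind these estimates, the snippet below models the CodeBuild charge. The $0.005-per-minute rate for the small compute type is an assumption based on published pricing at the time of writing, so check the current pricing page before relying on it:

```python
def monthly_codebuild_cost(build_minutes, rate_per_minute=0.005, free_minutes=100):
    """Estimate the monthly CodeBuild charge after the free allowance.

    The per-minute rate is an assumed figure for the small compute type;
    substitute the current published price for your instance size.
    """
    billable_minutes = max(0, build_minutes - free_minutes)
    return billable_minutes * rate_per_minute

# 80 minutes of builds stays within the 100-minute free allowance
print(monthly_codebuild_cost(80))   # 0.0
# 500 minutes: 400 billable minutes at $0.005/minute
print(monthly_codebuild_cost(500))  # 2.0
```

The same shape of calculation applies to the per-pipeline CodePipeline charge and EC2 instance-hours if you want a fuller projection.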

So, for this particular sample solution, if you just run it once and terminate it within the day, you’ll spend a little over $1 or even less if your CodePipeline usage is eligible for the AWS Free Tier.

Deployment Steps

There are three main steps in launching this solution: preparing an AWS account, launching the stack, and testing the deployment. Each is described in more detail in this section. Please note that you are responsible for any charges incurred while creating and launching your solution. 

Step 1. Prepare an AWS Account

  1. If you don’t already have an AWS account, create one at http://aws.amazon.com by following the on-screen instructions. Part of the sign-up process involves receiving a phone call and entering a PIN using the phone keypad. Be sure you’ve signed up for the CloudFormation service.
  2. Use the region selector in the navigation bar of the console to choose the Northern Virginia (us-east-1) Region.
  3. Create a key pair. To do this, in the navigation pane of the Amazon EC2 console, choose Key Pairs, Create Key Pair, type a name, and then choose Create.

Step 2. Launch the Stack

Click on the Launch Stack button below to launch the CloudFormation stack. Before you launch the stack, review the architecture, configuration, and other considerations discussed in this post. To download the template, click here.

 

 

Time to deploy: Approximately 7 minutes

The template includes default settings that you can customize by following the instructions in this post.

Step 3. Test the Deployment

Click on the CodePipelineURL Output in your CloudFormation stack. You’ll see that the pipeline has failed on the Source action. This is because the Source action expects a populated repository, and the newly-created repository is empty. To resolve this, commit the application files to the newly-created CodeCommit repository. First, you’ll need to clone the repository locally. To do this, get the CloneUrlSsh Output from the CloudFormation stack you launched in Step 2. A sample command is shown below; replace {CloneUrlSsh} with the value from the CloudFormation stack output. For more information on using SSH to interact with CodeCommit, see the Connect to the CodeCommit Repository section at: Create and Connect to an AWS CodeCommit Repository.

git clone {CloneUrlSsh}
cd {localdirectory}

Once you’ve cloned the repository locally, download the sample application files from the aws-codedeploy-sample-tomcat Git repository and place the files directly into your local repository (do not include the enclosing aws-codedeploy-sample-tomcat folder). Then, from the local directory, run the following to commit and push the new files to the CodeCommit repository:

git add . 
git commit -am "add all files from the AWS Java Tomcat CodeDeploy application" 
git push
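The clone-and-push flow above can be rehearsed offline by using a local bare repository as a stand-in for the CodeCommit remote. All paths, the identity settings, and the sample file here are illustrative, not values from this post:

```shell
# Fresh bare repository standing in for the CodeCommit remote
rm -rf /tmp/codecommit-standin.git /tmp/local-repo
git init --bare /tmp/codecommit-standin.git

# Clone it, just as you would with the CloneUrlSsh value
git clone /tmp/codecommit-standin.git /tmp/local-repo
cd /tmp/local-repo
git config user.email "dev@example.com"
git config user.name "Dev"

# Stand-in for the sample application files
echo '<html>sample Tomcat app</html>' > index.jsp

git add .
git commit -m "add all files from the AWS Java Tomcat CodeDeploy application"
git push origin HEAD
```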

Once these files have been committed, the pipeline will detect the changes in CodeCommit and start a new pipeline execution, and all stages and actions should succeed as a result. It takes approximately 3-4 minutes to complete all stages and actions in the pipeline.
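You can also watch the stages and actions progress from the AWS CLI. The pipeline name below is a placeholder; use the name of the pipeline created by your stack (visible in the CodePipelineURL output):

```shell
# Placeholder pipeline name (take the real one from your stack's outputs)
PIPELINE_NAME="java-tomcat-pipeline"

# Show the latest status of each stage in the pipeline
aws codepipeline get-pipeline-state --name "$PIPELINE_NAME" \
  --query 'stageStates[].{stage:stageName,status:latestExecution.status}'
```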

Access the Application and Pipeline Resources

Once the CloudFormation stack has completed successfully, select the stack, go to the Outputs tab, and click the CodePipelineURL output value. This launches the deployment pipeline in the CodePipeline console. Go to the Deploy action and click the Details link, then click the link for the Deployment Id of the CodeDeploy deployment, and then the link for the Instance Id. From the EC2 instance, copy the Public IP value, paste it into your browser, and press Enter to launch the Java sample application, as displayed in Figure 3.

Figure 3 – Deployed Java Application

You can access the Source and Build resources through the CodePipeline action details. For example, go to the pipeline and click the commit ID for the Source action, or click the Details link for the Build action. See Figure 4 for a detailed illustration of this pipeline.

Figure 4 – CodePipeline with Action Details

The CloudFormation Outputs also include direct links to the CodeCommit, CodeBuild, and CodeDeploy resources.
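Rather than clicking through the console, any of the stack outputs (CodePipelineURL, CloneUrlSsh, and so on) can be read with the AWS CLI. The stack name below is a placeholder for the stack you launched in Step 2:

```shell
# Placeholder stack name (use the stack you launched in Step 2)
STACK_NAME="java-tomcat-pipeline"

# Read a single output value, e.g. the pipeline console URL
aws cloudformation describe-stacks --stack-name "$STACK_NAME" \
  --query "Stacks[0].Outputs[?OutputKey=='CodePipelineURL'].OutputValue" \
  --output text
```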

Commit Changes to CodeCommit

Make some visual modifications to the src/main/webapp/WEB-INF/pages/index.jsp page and commit these changes to your CodeCommit repository to see them deployed through your pipeline. Perform these actions from the local clone of your CodeCommit repo (the directory created by your git clone command). To push the changes to the remote repository, run the commands below.

git add .
git commit -am "modify front page for AWS sample Java Tomcat CodeDeploy application"
git push

Once these changes have been committed, CodePipeline will detect them in your CodeCommit repo and start a new pipeline execution. After the pipeline completes successfully, follow the same instructions for launching the application from your browser. Upon entering the URL, you should see the modifications you made, as shown in Figure 5.

Figure 5 – Deployed Java Application with Changes Committed to CodeCommit, Built with CodeBuild, and Deployed with CodeDeploy

How-to Video

In this video, I walk through the deployment steps described above.

Additional Resources

Summary

In this post, you learned how to define and launch a CloudFormation stack that provisions a fully codified continuous delivery solution using CodeBuild. The example also automated a CodePipeline deployment pipeline integrating CodeCommit, CodeBuild, and CodeDeploy.

Furthermore, I described the prerequisites, architecture, implementation, costs, and deployment steps of the solution.

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/aws-codedeploy-sample-tomcat. Let us know if you have any comments or questions @stelligent.


The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.

Introducing AWS Managed Services

by Kate Miller | on | in APN Consulting Partners, AWS Product Launch, Cloud Managed Services, Migration, MSPs on AWS | | Comments

This is a guest post from Forest Johns, Principal Product Manager at AWS.

Today we are launching an operations management service called AWS Managed Services, as announced in Jeff Barr’s AWS Blog. Designed and built based on requests and feedback from some of our largest Enterprise customers, AWS Managed Services (AWS MS) provides customers with an alternative to in-house and outsourced data center operations management.

What is AWS MS?

AWS MS follows IT service management best practices, and standard features include patch management, backup, monitoring, security, and operational processes for incident, change, and problem management. At launch, the service supports 23 AWS services and is available in four AWS Regions: US East (Northern Virginia), US West (Oregon), Asia Pacific (Sydney), and EU (Ireland). AWS MS provides prescriptive guidance for data center deployment in the AWS Cloud at scale, along with standard APIs, stack templates, and automation for common operations.

What’s the Distinction Between AWS MS and the AWS Managed Services Program?

AWS Managed Services is not to be confused with the AWS Managed Services Program, which thoroughly vets an APN Partner’s own managed services offerings and next-generation cloud managed services capabilities. AWS MSP Partners undergo a rigorous validation audit of over 80 checks covering capabilities around application migration, DevOps, CI/CD, and security, as well as cloud and application management. They also have many years of experience in providing full lifecycle migration, integration, cloud management, application management, and application development. In addition to targeting Enterprises, the AWS MS offering was also built to enable AWS Managed Service Providers to augment or replace their existing AWS infrastructure management capabilities, allowing them to focus on migration and application management work for their clients.

What’s the Role of AWS Consulting Partners in AWS MS?

APN Partners were key in the development of this service, and they play an active role in the deployment and use of AWS MS. Having a standard operating environment not only fast-tracks customer onboarding but also creates many different opportunities for APN Partners to enable and add value for AWS MS customers. In the coming weeks, we will also be launching a new AWS Managed Services designation as part of the AWS Service Delivery Program for APN Partners (stay tuned to the APN Blog for more information).

Key to the integration and deployment of AWS MS, AWS Consulting Partners enable Enterprises to migrate their existing applications to AWS and integrate their on-premises management tools with their cloud deployments. Consulting Partners will also be instrumental in building and managing cloud-based applications for customers running on the infrastructure stacks managed by AWS MS. Onboarding to AWS MS typically requires 8-10 weeks of design/strategy, system/process integration, and initial app migration, all of which can be performed by qualified AWS Consulting Partners. To participate, APN Partners will need to complete the AWS Managed Service Provider Program validation process and/or earn the Migration or DevOps Competency, and also complete the specialized AWS MS partner training.

Learn More

We invite APN Partners who are interested in becoming involved in offering the AWS Managed Services to email us at aws-managed-services-inquiries@amazon.com.

Watch the re:Invent Global Partner Summit Keynote and GPS Sessions on the APN Portal

by Kate Miller | on | in APN Content Launch, re:Invent 2016 | | Comments

Were you unable to catch the AWS Global Partner Summit keynote at re:Invent? Were there Global Partner Summit breakout sessions you weren’t able to attend? You can now view the keynote and the sessions on-demand through the APN Portal!


To view the AWS Global Partner Summit keynote, click here.

We hosted a number of business and technical breakout sessions at the Global Partner Summit. To view all of the AWS Global Partner Summit breakout sessions, click here. What follows are links to specific sessions: