AWS Blog

New – Create an Amazon Aurora Read Replica from an RDS MySQL DB Instance

by Jeff Barr | in Amazon Aurora, Amazon RDS

Migrating from one database engine to another can be tricky when the database is supporting an application or a web site that is running 24×7. Without the option to take the database offline, an approach that is based on replication is generally the best solution.

Today we are launching a new feature that allows you to migrate from an Amazon RDS DB Instance for MySQL to Amazon Aurora by creating an Aurora Read Replica. The migration process begins by creating a DB snapshot of the existing DB Instance and then using it as the basis for a fresh Aurora Read Replica. After the replica has been set up, replication is used to bring it up to date with respect to the source. Once the replication lag drops to zero, the replica is fully caught up. At this point, you can make the Aurora Read Replica into a standalone Aurora DB cluster and point your client applications at it.

Migration takes several hours per terabyte of data, and works for MySQL DB Instances of up to 6 terabytes. Migration runs somewhat faster for InnoDB tables than for MyISAM tables, and also benefits from uncompressed tables. If migration speed is a factor, you can improve it by converting your MyISAM tables to InnoDB and uncompressing any compressed tables.

To migrate an RDS DB Instance, simply select it in the AWS Management Console, click on Instance Actions, and choose Create Aurora Read Replica:

Then enter your database instance identifier, set any other options as desired, and click on Create Read Replica:

You can monitor the progress of the migration in the console:

After the migration is complete, wait for the replica lag on the new Aurora Read Replica to reach zero (run the SHOW SLAVE STATUS command on the replica and look for Seconds_Behind_Master), indicating that the replica is in sync with the source. Then stop the flow of new transactions to the source MySQL DB Instance and promote the Aurora Read Replica to a DB cluster:

Confirm your intent and then wait (typically a minute or so) until the new cluster is available:

Instruct your application to use the cluster’s read and write endpoints, and you are good to go!
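
If you prefer to script the migration instead of clicking through the console, the same operations are exposed by the RDS API. Here is a minimal, hedged sketch using the AWS SDK for JavaScript; the identifiers and ARN are placeholders, and in practice you would also supply network and instance settings:

var AWS = require('aws-sdk');
var rds = new AWS.RDS({ region: 'us-east-1' });

// Step 1: create an Aurora cluster that replicates from the existing RDS MySQL instance.
// The console's "Create Aurora Read Replica" action also creates a DB instance in the
// cluster for you; with the API you would follow up with createDBInstance.
rds.createDBCluster({
  DBClusterIdentifier: 'aurora-migration-cluster',   // placeholder identifier
  Engine: 'aurora',
  ReplicationSourceIdentifier: 'arn:aws:rds:us-east-1:123456789012:db:my-mysql-instance'
}, function (err, data) {
  if (err) console.log(err, err.stack);
  else console.log('Replica cluster status:', data.DBCluster.Status);
});

// Step 2 (run later, once replica lag has reached zero): promote the cluster so that
// it becomes a standalone Aurora DB cluster.
function promoteCluster() {
  rds.promoteReadReplicaDBCluster({
    DBClusterIdentifier: 'aurora-migration-cluster'
  }, function (err, data) {
    if (err) console.log(err, err.stack);
    else console.log('Promotion started:', data.DBCluster.Status);
  });
}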

Jeff;

 

From Raspberry Pi to Supercomputers to the Cloud: The Linux Operating System

by Ana Visneski | in AWS Marketplace, Linux

Matthew Freeman and Luis Daniel Soto are back talking about the use of Linux through the AWS Marketplace.
– Ana


Linux is widely used in corporations now as the basis for everything from file servers to web servers to network security servers. The no-cost as well as commercial availability of distributions makes it an obvious choice in many scenarios. Distributions of Linux now power everything from the tiny Raspberry Pi to the largest supercomputers in the world. There is a wide variety of minimal and security-hardened distributions, some of them designed for GPU workloads.

Even more compelling is the use of Linux in cloud-based infrastructures. Its comparatively lightweight architecture, flexibility, and options for customization make Linux an ideal choice for permanent network infrastructures in the cloud, as well as for specialized uses such as temporary high-performance server farms that handle computational loads for scientific research. As a demonstration of its own commitment to the Linux platform, AWS developed and continues to maintain its own version of Linux that is tightly coupled with AWS services.

AWS has been a partner to the Linux and Open Source Communities through AWS Marketplace:

  • It is a managed software catalog that makes it easy for customers to discover, purchase, and deploy the software and services they need to build solutions and run their businesses.
  • It simplifies software licensing and procurement by enabling customers to accept user agreements, choose pricing options, and automate the deployment of software and associated AWS resources with just a few clicks.
  • It can be searched and filtered to help you select the Linux distribution – independently or in combination with other components – that best suits your business needs.

Selecting a Linux Distribution for Your Company
If you’re new to Linux, the dizzying array of distributions can be overwhelming. Deciding which distribution to use depends on a lot of different factors, and customers tell us that the following considerations are important to them:

  • Existing investment in Linux, if any. Is this your first foray into Linux? If so, then you’re in a position to weigh all options pretty equally.
  • Existing platforms in use (such as on-premises networks). Are you adding a cloud infrastructure that must connect to your in-house network? If so, you need to consider which of the Linux distributions has the networking and application connectors you require.
  • Intention to use more than one cloud platform. Are you already using another cloud provider? Will it need to interconnect with AWS? Your choice of Linux distribution may be affected by what’s available for those connections.
  • Available applications, libraries, and components. Your choice of Linux distribution should take into consideration future requirements, and ongoing software and technical support.
  • Specialized uses, such as scientific or technical requirements. Certain applications only run on specific, customized Linux distributions.

By examining your responses to each of these areas, you can narrow the list of possible Linux distributions to suit your business needs.

Linux in AWS Marketplace
AWS Marketplace is a great place to locate and begin using Linux distributions along with the top applications that run on them. You can deploy different versions of the distributions from this online store. AWS scans the catalog continuously for security vulnerabilities, notifies customers of any issues found, and works with experts to find work-arounds and updates. In addition to the support provided by the sellers, the AWS Forums are a great place to ask questions about using Linux on AWS (all you need is a free forum account). You can also get further details about Linux on AWS from the AWS Documentation.

Applications from AWS Marketplace Running on Linux
Here is a sampling of the featured Linux distributions and applications that run on them, which customers launch from AWS Marketplace.

CentOS Versions 7, 6.5, and 6
The CentOS Project is a community-driven, free software effort focused on delivering a robust open source ecosystem. CentOS is derived from the sources of Red Hat Enterprise Linux (RHEL), and it aims to be functionally compatible with RHEL. CentOS Linux is no-cost to use, and free to redistribute. For users, CentOS offers a consistent, manageable platform that suits a wide variety of deployments. For open source communities, it offers a solid, predictable base to build upon, along with extensive resources to build, test, release, and maintain their code. AWS has several CentOS AMIs that you can launch to take advantage of the stability and widespread use of this distribution.

Debian GNU Linux
Debian GNU/Linux, which includes the GNU OS tools and Linux kernel, is a popular and influential Linux distribution. Users have access to repositories containing thousands of software packages ready for installation and use. Debian is known for relatively strict adherence to the philosophies of Unix and free software as well as using collaborative software development and testing processes. It is popular as a web server operating system. Debian officially contains only free software, but non-free software can be downloaded from the Debian repositories and installed. Debian focuses on stability and security, and is used as a base for many other distributions. AWS has AMIs for Debian available for launch immediately.

Amazon Linux AMI
Amazon Linux is a supported and maintained Linux image provided by AWS, and it pairs naturally with Amazon EC2 Container Service (ECS). Amazon ECS makes it easy to manage Docker containers at scale by providing a centralized service with programmatic access to the complete state of the containers and Amazon EC2 instances in the cluster, scheduling containers in the proper location, and using familiar Amazon EC2 features like security groups, Amazon EBS volumes, and IAM roles. This lets you make containers a foundational building block for your applications by eliminating the need to run your own cluster manager.

Other popular distributions available in AWS Marketplace include Ubuntu, SUSE, Red Hat, Oracle Linux, Kali Linux and more.

Getting Started with Linux on AWS Marketplace
You can view a list of hundreds of Linux offerings by simply selecting the Operating System category from the Shop All Categories link on the AWS Marketplace home screen.

From there you can select your preferred distribution and browse the available offerings:

Most offerings include the ability to launch using 1-Click, so your Linux server can be up and running in minutes.

Flexibility with Pay-As-You-Go Pricing
You pay Amazon EC2 usage costs plus, for certain distributions, an hourly (or monthly or annual) commercial Linux fee, billed directly through your AWS account. You can see in advance what your costs will be, depending on the instance type you select. As a result, using AWS Marketplace is one of the fastest and easiest ways to launch your Linux solution.

Visit http://aws.amazon.com/mp/linux to learn more about Linux on AWS Marketplace.

Matthew Freeman and Luis Daniel Soto

 

Ready-to-Run Solutions: Open Source Software in AWS Marketplace

by Ana Visneski | in AWS Marketplace

There are lots of exciting things going on in AWS Marketplace. Here to tell you more about open source software in the marketplace are Matthew Freeman and Luis Daniel Soto.

– Ana


According to industry research, enterprise use of open source software (OSS) is on the rise. More and more corporate developers are asking to use available OSS libraries as part of ongoing development efforts at work. These individuals may already be using OSS in their own projects (evenings and weekends, for example), and naturally want to bring to work the tools and techniques that help them elsewhere.

Consequently, development organizations in all sectors are examining the case for using open source software for applications within their own IT infrastructures as well as in the software they sell. In this Overview, we’ll show you why obtaining your open source software through AWS makes sense from a development and fiscal perspective.

Open Source Development Process
Because open source software is generally developed in independent communities of participants, acquiring and managing software versions is usually done through online code repositories. With code coming from disparate sources, it can be challenging to get the code libraries and development tools to work well together. But AWS Marketplace lets you skip this process and directly launch EC2 instances with the OSS you want. AWS Marketplace also has distributions of Linux that you can use as the foundation for your OSS solution.

Preconfigured Stacks Give You an Advantage
While we may take this 1-Click launch ability for granted with commercial software, for OSS, having preconfigured AMIs is a huge advantage. AWS Marketplace gives software companies that produce combinations or “stacks” of the most popular open source software a location from which these stacks can be launched into the AWS cloud. Companies such as TurnKey and Bitnami use their OSS experts to configure and optimize these code stacks so that the software works well together. These companies stay current with new releases of the OSS, and update their stacks accordingly as soon as new versions are available. Some of these companies also offer cloud hosting infrastructures as a paid service to make it even easier to launch and manage cloud-based servers.

As an example, one of the most popular combinations of open source software is the LAMP stack, which consists of a Linux distribution, Apache Web Server, a MySQL database, and the PHP programming library. You can select a generic LAMP stack based on the Linux distribution you prefer, then install your favorite development tools and libraries.

You would then add to it any adjustments to the underlying software that you need or want to make for your application to run as expected. For example, you may want to change the memory allocations for the application, or change the maximum file upload size in the PHP settings.

You could select an OSS application stack that contains the LAMP elements plus a single application such as WordPress, Moodle, or Joomla!®. These stacks would be configured by the vendor with optimal settings for that individual application so that it runs smoothly, with sufficient memory and disk allocations based on the application requirements. This is where stack vendors excel in providing added value to the basic software provisioning.

You might instead choose a generic LAMP stack because you need to combine multiple applications on a single server that use common components. For example, WordPress has plugins that allow it to interoperate with Moodle directly. Both applications use Apache Web Server, PHP, and MySQL. You save time by starting with the LAMP stack, and configuring the components individually as needed for WordPress and Moodle to work well together.

These are just 2 real-world examples of how you could use a preconfigured solution from AWS Marketplace and adapt it to your own needs.

OSS in AWS Marketplace
AWS Marketplace is one of the largest sites for obtaining and deploying OSS tools, applications, and servers. Here are some of the other categories in which OSS is available.

  • Application Development and Test Tools. On AWS Marketplace you can find solutions and CloudFormation templates for EC2 servers configured with application frameworks such as Zend, ColdFusion, Ruby on Rails, and Node.js. You’ll also find popular OSS choices for development and testing that support agile software development, with key products such as Jenkins for test automation, Bugzilla for issue tracking, and Subversion for source code management, along with configuration management tools. Learn more »
  • Infrastructure Software. The successful maintenance and protection of your network is critical to your business success. OSS libraries such as OpenLDAP and OpenVPN make it possible to launch a cloud infrastructure to accompany or entirely replace an on-premises network. From offerings dedicated to handling networking and security processing to security-hardened individual servers, AWS Marketplace has numerous security solutions available to assist you in meeting the security requirements for different workloads. Learn more »
  • Database and Business Intelligence. This category includes OSS database, data management, and open data catalog solutions. Business intelligence and advanced analytics software can help you make sense of the data coming from transactional systems, sensors, cell phones, and a whole range of Internet-connected devices. Learn more »
  • Business Software. Availability, agility, and flexibility are key to running business applications in the cloud. Companies of all sizes want to simplify infrastructure management, deploy more quickly, lower costs, and increase revenue. Business software running on Linux delivers on these needs. Learn more »
  • Operating Systems. AWS Marketplace has a wide variety of operating systems, from FreeBSD and minimal, security-hardened Linux installations to specialized distributions for security and scientific work. Learn more »

How to Get Started with OSS on AWS Marketplace
Begin by identifying the combination of software you want, and enter keywords in the Search box at the top of the AWS Marketplace home screen to find suitable offerings.

Or if you want to browse by category, just click “Shop All Categories” and select from the list.

Once you’ve made your initial search or selection, there are nearly a dozen ways to filter the results until the best candidates remain. For example, you can select your preferred Linux distribution by expanding the All Linux filter to help you find the solutions that run on that distribution. You can also filter for Free Trials, Software Pricing Plans, EC2 Instance Types, AWS Region, Average Rating, and so on.

Click on the title of the listing to see the details of that offering, including pricing, regions, product support, and links to the seller’s website. When you’ve made your selections, and you’re ready to launch the instance, click Continue, and log into your account.

Because you log in, AWS Marketplace can detect the presence of existing security groups, key pairs, and VPC settings. Make adjustments on the Launch on EC2 page, then click Accept Software Terms & Launch with 1-Click, and your instance will launch immediately.

If you prefer you can do a Manual Launch using the AWS Console with the selection you’ve made, or start the instance using the API or command line interface (CLI). Either way, your EC2 instance is up and running within minutes.

Flexibility with Pay-As-You-Go Pricing
You pay Amazon EC2 usage costs plus, if applicable, an hourly (or monthly or annual) commercial open source software fee, billed directly through your AWS account. As a result, using AWS Marketplace is one of the fastest and easiest ways to get your OSS software up and running.

Visit http://aws.amazon.com/mp/oss to learn more about open source software on AWS Marketplace.

Matthew Freeman, Category Development Lead, AWS Marketplace
Luis Daniel Soto, Sr. Category GTM Leader, AWS Marketplace

AWS Lambda – A Look Back at 2016

by Tara Walker | in Amazon API Gateway, AWS Lambda

2016 was an exciting year for AWS Lambda, Amazon API Gateway, and serverless compute technology, to say the least. But just in case you have been hiding away and haven’t heard of serverless computing with AWS Lambda and Amazon API Gateway, let me introduce these great services to you.  AWS Lambda lets you run code without provisioning or managing servers: it is an event-driven, serverless compute service that allows developers to bring their functions to the cloud easily for virtually any type of application or backend.  Amazon API Gateway helps you quickly build highly scalable, secure, and robust APIs, and provides the ability to maintain and monitor the APIs you create.

With the momentum of serverless in 2016, of course, the year had to end with a bang as the AWS team launched some powerful service features at re:Invent to make it even easier to build serverless solutions.  These features include:

Since Jeff has already introduced most of the aforementioned new service features for building distributed applications and microservices, such as Step Functions, let’s walk through the last four new features using a common serverless use case: real-time stream processing.  In this walk-through, we will implement a Dead Letter Queue for notifications of errors that may come from the Lambda function processing a stream of data, take an existing Lambda function written in Node.js that processes the stream and rewrite it in C#, and then build an example of monetizing a Lambda-backed API using API Gateway’s integration with AWS Marketplace.  This will be exciting, so let’s get started.

During the AWS Developer Days in San Francisco and Austin, I presented an example of leveraging AWS Lambda for real-time stream processing by building a demo showcasing a streaming solution with the Twitter Streaming APIs. I will build upon this example to demonstrate the power of Dead Letter Queues (DLQ), C# support, the API Gateway monetization features, and the open source template for the API Gateway Developer Portal.  In the demo, a console or web application streams tweets gathered from the Twitter Streaming API that contain the keywords ‘awscloud’ and/or ‘serverless’.  Those tweets are sent in real time to Amazon Kinesis Streams, where Lambda detects the new records and processes each batch by writing the tweets to the NoSQL database, Amazon DynamoDB.

Now that we understand the real-time streaming process demo’s workflow, let’s take a deeper look at the Lambda function that processes the batch records from Kinesis.  First, you will notice below that the Lambda function, DevDayStreamProcessor, has an event source or trigger that is a Kinesis stream named DevDay2016Stream with a Batch size of 100.  Our Lambda function will poll the stream periodically for new records and automatically read and process batches of records, in this case, the tweets detected on the stream.

Now we will examine our Lambda function code, which is written in Node.js 4.3. The section of the Lambda function shown below loops through the batch of tweet records from our Kinesis stream, parses each record, and writes the desired tweet information into an array of JSON data. The array of JSON tweet items is then passed to the ddbItemsWrite function, which is defined outside of our Lambda handler.

'use strict';
console.log('Loading function');

var timestamp;
var twitterID;
var tweetData;
var ddbParams;
var itemNum = 0;
var dataItemsBatch = [];
var dbBatch = [];
var AWS = require('aws-sdk');
var ddbTable = 'TwitterStream';
var dynamoDBClient = new AWS.DynamoDB.DocumentClient();

exports.handler = (event, context, callback) => {
    var counter = 0; 
    
    event.Records.forEach((record) => {
        // Kinesis data is base64 encoded so decode here
        console.log("Base 64 record: " + JSON.stringify(record, null, 2));
        const payload = new Buffer(record.kinesis.data, 'base64').toString('ascii');
        console.log('Decoded payload:', payload);
        
        var data = payload.replace(/[\u0000-\u0019]+/g," "); 
        try
        {  tweetData = JSON.parse(data);   }
        catch(err)
        {  callback(err, err.stack);   }
        
        timestamp = "" + new Date().getTime();
        twitterID = tweetData.id.toString();
        itemNum = itemNum+1;
               
        var ddbItem = {
            PutRequest: {
                Item: {
                    TwitterID: twitterID,
                    TwitterUser: tweetData.username.toString(),
                    TwitterUserPic: tweetData.pic,
                    TwitterTime: new Date(tweetData.time.replace(/( \+)/, ' UTC$1')).toLocaleString(),
                    Tweet: tweetData.text,
                    TweetTopic: tweetData.topic,
                    Tags: (tweetData.hashtags) ? tweetData.hashtags : " ",
                    Location: (tweetData.loc) ? tweetData.loc : " ",
                    Country: (tweetData.country) ? tweetData.country : " ",
                    TimeStamp: timestamp,
                    RecordNum: itemNum
                }
            }
        };

        dataItemsBatch.push(ddbItem);
        counter++;
    });
    
    var twitterItems = {}; 
    twitterItems[ddbTable] = dataItemsBatch; 
    ddbItemsWrite(twitterItems, 0, context, callback); 

};

The ddbItemsWrite function shown below will take the array of JSON tweet records processed from the Kinesis stream, and write the records multiple items at a time to our DynamoDB table using batch operations. This function leverages the DynamoDB best practice of retrying unprocessed items by implementing an exponential backoff algorithm to prevent write request failures due to throttling on the individual tables.

 function ddbItemsWrite(items, retries, ddbContext, ddbCallback) 
    { 
        dynamoDBClient.batchWrite({ RequestItems: items }, function(err, data) 
            { 
                if (err) 
                { 
                    console.log('DDB call failed: ' + err, err.stack); 
                    ddbCallback(err, err.stack); 
                } 
                else 
                { 
                    if(Object.keys(data.UnprocessedItems).length) 
                    { 
                        console.log('Unprocessed items remain, retrying.'); 
                        var delay = Math.min(Math.pow(2, retries) * 100, ddbContext.getRemainingTimeInMillis() - 200); 
                        setTimeout(function() {ddbItemsWrite(data.UnprocessedItems, retries + 1, ddbContext, ddbCallback)}, delay); 
                    } 
                    else 
                    { 
                         ddbCallback(null, "Success");
                         console.log("Completed Successfully");
                    } 
                } 
            } 
        );
    }

Currently, this Lambda function works as expected and will successfully process tweets captured in Kinesis from the Twitter Streaming API. However, the function has a flaw that will cause an error when processing batch write requests to our DynamoDB table.  The current code does not take into account that the DynamoDB batchWrite operation accepts no more than 25 write (put) requests, or 16 MB of data, per call. Unless the code is changed so that ddbItemsWrite handles batches of 25, or the handler groups the items into sets of 25 requests before calling ddbItemsWrite, a validation exception will be thrown whenever the batch of tweet items is larger than 25.  This is a great example of a bug that is not easily detected in small-scale testing scenarios yet will cause failures under production load.
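
For reference, the grouping described above could look something like the sketch below (in this walk-through we deliberately leave the flaw in place so that it can trigger the Dead Letter Queue later):

// Sketch: send the accumulated put requests to DynamoDB in groups of 25,
// the maximum number of write requests accepted by a single batchWrite call.
var BATCH_LIMIT = 25;

for (var i = 0; i < dataItemsBatch.length; i += BATCH_LIMIT) {
    var twitterItems = {};
    twitterItems[ddbTable] = dataItemsBatch.slice(i, i + BATCH_LIMIT);
    ddbItemsWrite(twitterItems, 0, context, callback);
}
// Note: a complete fix would also invoke the Lambda callback only after
// every group has finished writing.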

 

Dead Letter Queues

Now that we are aware of an event that will cause the ddbItemsWrite Lambda function to throw an exception and/or an event that will fail while processing records, we have a first-rate scenario for leveraging Dead Letter Queues (DLQ).

Since AWS Lambda DLQ functionality is only available for asynchronous event sources like Amazon S3, Amazon SNS, AWS IoT, or direct asynchronous invocations, and not for streaming event sources such as Amazon Kinesis or Amazon DynamoDB streams, our first step is to break this Lambda function into two functions.  The first Lambda function will handle the processing of the Kinesis stream, and the second Lambda function will take the data processed by the first function and write the tweet information to DynamoDB.  We will then set up our DLQ on the second Lambda function for the error that will occur when writing the batch of tweets to DynamoDB, as noted above.

We have two options when choosing a target for our DLQ: an Amazon SNS topic or an Amazon SQS queue.  In this walk-through, we will opt for an Amazon SQS queue.  Therefore, my first step is to create a Standard SQS queue.  A Standard queue offers high throughput, at-least-once delivery (a message may occasionally be delivered more than once), and best-effort ordering (messages may arrive in a different order than they were sent).  You can learn more about creating SQS queues and queue types in the Amazon SQS documentation.

Once my queue, StreamDemoDLQ, is created, I grab its ARN from the Details tab of the selected queue. If I am not using the console to designate the DLQ resource for this function, I need the queue’s ARN so that my Lambda function can identify this SQS queue as the DLQ target for error and event failure notifications. I will also use the ARN to add permissions to my Lambda execution role policy so the function can access this SQS queue.

I now return to my Lambda function, select the Configuration tab, and expand the Advanced settings section. I select SQS in the DLQ Resource field and choose my StreamDemoDLQ queue from the SQS Queue dropdown.
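
If you are scripting this instead of using the console, the same setting can be applied with the UpdateFunctionConfiguration API. Here is a minimal sketch using the AWS SDK for JavaScript; the function name and queue ARN are placeholders:

var AWS = require('aws-sdk');
var lambda = new AWS.Lambda({ region: 'us-east-1' });

// Point the function's Dead Letter Queue at the SQS queue created above
lambda.updateFunctionConfiguration({
  FunctionName: 'DevDayStreamWriter',   // placeholder name for the second function
  DeadLetterConfig: {
    TargetArn: 'arn:aws:sqs:us-east-1:123456789012:StreamDemoDLQ'
  }
}, function (err, data) {
  if (err) console.log(err, err.stack);
  else console.log('DLQ configured for', data.FunctionName);
});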

Remember, the execution role for the Lambda function must explicitly provide sqs:SendMessage permissions in order to successfully send messages to your SQS DLQ.  Therefore, I ensured that my Lambda role, lambda_kinesis_role, has the following IAM policy for SQS permissions.
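
A minimal policy along these lines grants the required permission (the queue ARN below is a placeholder for your own StreamDemoDLQ ARN):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:123456789012:StreamDemoDLQ"
    }
  ]
}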


We have now successfully configured a Dead Letter Queue for our Lambda function using Amazon SQS. To learn more about Dead Letter Queues in Lambda, read the Troubleshooting and Monitoring section of the AWS Lambda Developer Guide and check out the AWS Compute Blog post on Dead Letter Queues.

 

C# Support

As I mentioned earlier, another very exciting feature added to Lambda during AWS re:Invent was support for the C# language via the open source .NET Core 1.0 platform.  Since the Lambda console does not yet offer editing for compiled languages, in order to author a C# Lambda function you can use tooling in Visual Studio with the AWS Toolkit, Yeoman, and/or the .NET CLI.  To deploy Lambda functions written in C#, you can use the Lambda plugin in the AWS Toolkit for Visual Studio or create a deployment package with the .NET Core command line.

A C# Lambda function handler should be defined as an instance or static method in a class. The handler takes two parameters: the first is the input object containing the event data, and the second is the Lambda context object of type ILambdaContext. The event data input types for AWS services are found in the following namespaces:

  • Amazon.Lambda.APIGatewayEvents
  • Amazon.Lambda.CognitoEvents
  • Amazon.Lambda.ConfigEvents
  • Amazon.Lambda.DynamoDBEvents
  • Amazon.Lambda.KinesisEvents
  • Amazon.Lambda.S3Events
  • Amazon.Lambda.SNSEvents

Now that we have discussed C# support in Lambda in more detail, let’s rewrite our DevDayStreamProcessor Lambda function in C#. For this example, I will use the Visual Studio IDE to write the Lambda function, and additionally take advantage of the AWS Lambda Visual Studio plugin to deploy the function. Remember, in order to use the AWS Toolkit for Visual Studio with Lambda, you will need Visual Studio 2015 Update 3 and the .NET Core tools. You can read more about installing Visual Studio 2015 Update 3 and .NET Core here.

To create the C# function using Visual Studio, I start a New Project, select AWS Lambda Project (.NET Core) and name it ServerlessStreamProcessor.

What’s really cool about taking advantage of the AWS Toolkit for Visual Studio to author this function is that inside of Visual Studio I can use Lambda blueprints to get started in a similar way that I would in the Lambda console.  Therefore, in order to replicate the DevDayStreamProcessor in C#, I will select the Simple Kinesis Function blueprint.

It should be noted that when writing Lambda functions in C#, there is no need to mark the class declaration or the target handler function as a Lambda function. Additionally, when writing to CloudWatch Logs you can use the standard C# Console.WriteLine function or the LogLine function exposed through the Logger property of the ILambdaContext interface. With the template for accessing the Kinesis stream in place, I finish writing the C# Lambda function, ServerlessStreamProcessor, using the same variable names as in the Node.js code in DevDayStreamProcessor. Please note the C# Lambda handler function below.

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using Amazon.Lambda.Core;
using Amazon.Lambda.KinesisEvents;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;
using Newtonsoft.Json.Linq;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializerAttribute(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace ServerlessStreamProcessor
{
    public class LambdaTwitterStream
    {
        string twitterID, timeStamp;
        int itemNum = 0;
        
        private static AmazonDynamoDBClient dynamoDBClient = new AmazonDynamoDBClient();
        List<TwitterItem> dataItemsBatch = new List<TwitterItem>();
        
        public void FunctionHandler(KinesisEvent kinesisEvent, ILambdaContext context)
        {
            DynamoDBContext dbContext = new DynamoDBContext(dynamoDBClient);
            context.Logger.LogLine($"Beginning to process {kinesisEvent.Records.Count} records...");
            
            foreach (var record in kinesisEvent.Records)
            {
                context.Logger.LogLine($"Event ID: {record.EventId}");
                context.Logger.LogLine($"Event Name: {record.EventName}");

                // Kinesis data is base64 encoded so decode here
                string tweetData = GetRecordContents(record.Kinesis);
                context.Logger.LogLine($"Decoded Payload: {tweetData}");
                tweetData = @"" + tweetData;
                JObject twitterObj = JObject.Parse(tweetData);
                
                twitterID = twitterObj["id"].ToString();
                timeStamp = DateTime.Now.Millisecond.ToString();
                itemNum++;
                context.Logger.LogLine(timeStamp);
                context.Logger.LogLine($"Twitter ID is: {twitterID}");
                context.Logger.LogLine(itemNum.ToString());

                TwitterItem ddbItem = new TwitterItem()
                { 
                    TwitterID = twitterID,
                    TwitterUser = twitterObj["username"].ToString(),
                    TwitterUserPic = twitterObj["pic"].ToString(),
                    TwitterTime = DateTime.Parse(twitterObj["time"].ToString()).ToUniversalTime().ToString(),
                    Tweet = twitterObj["text"].ToString(),
                    TweetTopic = twitterObj["topic"].ToString(),
                    Tags = twitterObj["hashtags"] != null ? twitterObj["hashtags"].ToString() : String.Empty,
                    Location = twitterObj["loc"] != null ? twitterObj["loc"].ToString() : String.Empty,
                    Country = twitterObj["country"] != null ? twitterObj["country"].ToString() : String.Empty,
                    TimeStamp =  timeStamp,
                    RecordNum = itemNum
                };
                
                dataItemsBatch.Add(ddbItem);
            }

            context.Logger.LogLine(JArray.FromObject(dataItemsBatch).ToString());
            ddbItemsWrite(dataItemsBatch, 0, dbContext, context);
            context.Logger.LogLine("Success - Completed Successfully");
            context.Logger.LogLine("Stream processing complete.");
        }

There are only a few differences that should be noted between our Kinesis stream processor written in C# and our original Node.js code.  Since the input parameter type supported by default in C# Lambda functions is the System.IO.Stream type, the Kinesis base64 string is decoded by using a StreamReader with ASCII encoding in a blueprint provided function, GetRecordContents.

 

private string GetRecordContents(KinesisEvent.Record streamRecord)
{
    using (var reader = new StreamReader(streamRecord.Data, Encoding.ASCII))
    {
        return reader.ReadToEnd();
    }
}

The other thing to note is that in order to write the tweet data to the DynamoDB table, I added the AWS .NET SDK NuGet package for DynamoDB, AWSSDK.DynamoDBv2, to the Lambda function project via the NuGet package manager within Visual Studio.  I also created a .NET data object, TwitterItem, to map to the data being stored in the DynamoDB table. Using the AWS .NET SDK’s higher-level programming interface, the object persistence model for DynamoDB, I created a collection of TwitterItem objects to be written via the BatchWrite class in our ddbItemsWrite C# function.

private async void ddbItemsWrite(List<TwitterItem> items, int retries, DynamoDBContext ddbContext, ILambdaContext context)
{
    BatchWrite<TwitterItem> twitterStreamBatchWrite = ddbContext.CreateBatchWrite<TwitterItem>();

    try
    {
        twitterStreamBatchWrite.AddPutItems(items);
        await twitterStreamBatchWrite.ExecuteAsync();
    }
    catch (Exception ex)
    {
        context.Logger.LogLine($"DDB call failed: {ex.Source} ");
        context.Logger.LogLine($"Exception: {ex.Message}");
        context.Logger.LogLine($"Exception Stacktrace: {ex.StackTrace}");
    }
}

Another benefit of using the AWS Toolkit for Visual Studio to author my C# Lambda function is that I can deploy my Lambda function directly to AWS with a single click.  Right-clicking my project name in the Solution Explorer brings up a menu option, Publish to AWS Lambda, which opens a dialog that collects the information needed to deploy my Lambda function to AWS.

It is important to note that the handler signature follows the format Assembly::Namespace.ClassName::Method; therefore, the handler for our C# Lambda function shown here is ServerlessStreamProcessor::ServerlessStreamProcessor.LambdaTwitterStream::FunctionHandler.  We provide this information to the Upload to AWS Lambda dialog box and select Next to assign a role for the function.

Upon completion, you can test the function in the Lambda console, or in Visual Studio with the plugin provided by the AWS Toolkit (shown below), using sample data for the triggering event source, for an iterative approach to developing the Lambda function.

You can learn more about authoring AWS Lambda functions using the C# Language in the AWS Lambda developer guide or by reading the post announcing C# Support on the Compute Blog.

 

API Gateway Monetization and Developer Portal

If you have been following the microservices momentum, you may be aware of an architectural pattern that calls for using smart endpoints and/or an API gateway with REST APIs to manage access to, and exposure of, the individual services that make up a microservices solution.  Amazon API Gateway enables the creation and management of RESTful APIs that expose AWS Lambda functions, external HTTP endpoints, and other AWS services.  In addition, Amazon API Gateway allows clients and external developers to access deployed APIs via the HTTP protocol or a platform- or language-targeted SDK.

With the introduction of SaaS Subscriptions on AWS Marketplace and the integration of API Gateway with the AWS Marketplace, you can now monetize your APIs by allowing customers to directly consume the APIs you create with API Gateway.  AWS customers can subscribe to, and be billed for, the APIs published on the marketplace using their existing AWS account.  Thanks to this integration, the process of getting started is easy.

To get started, you must ensure that you have enabled the Usage Plan feature in Amazon API Gateway.

Once enabled, the next step is to create a Usage Plan, enable throttling (if desired) with target rate and burst request thresholds, and finally enable quotas (if you choose) by providing a target request quota per set time frame.

Next, we choose the APIs and related stage(s) that we wish to associate with the usage plan. Please note that this is an optional step, as you can opt not to associate a specific API with your usage plan.

All that is left to do is add or create an API key for the usage plan.  Again, it should be noted that this is also an optional step in creating your usage plan.

Now that we have our usage plan, StreamingPlan, we are ready for the next step in preparation for selling our API on the marketplace. You have the option to create multiple usage plans with varying APIs and limits, and sell these plans as differentiated API products on AWS Marketplace.
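
If you would rather script these steps than use the console, the same Usage Plan operations are available through the API Gateway API. Here is a hedged sketch using the AWS SDK for JavaScript; the API ID, stage, limits, and names are placeholders:

var AWS = require('aws-sdk');
var apigateway = new AWS.APIGateway({ region: 'us-east-1' });

// Create a usage plan with throttling and a monthly quota
apigateway.createUsagePlan({
  name: 'StreamingPlan',
  throttle: { rateLimit: 100, burstLimit: 200 },
  quota: { limit: 100000, period: 'MONTH' },
  apiStages: [{ apiId: 'abcde12345', stage: 'prod' }]   // placeholder API and stage
}, function (err, plan) {
  if (err) return console.log(err, err.stack);

  // Create an API key and attach it to the plan
  apigateway.createApiKey({ name: 'StreamingPlanKey', enabled: true }, function (err, key) {
    if (err) return console.log(err, err.stack);
    apigateway.createUsagePlanKey({
      usagePlanId: plan.id,
      keyId: key.id,
      keyType: 'API_KEY'
    }, function (err, data) {
      if (err) console.log(err, err.stack);
      else console.log('Usage plan', plan.id, 'ready with key', data.id);
    });
  });
});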

In order to enable customers to buy our new API product, however, AWS Marketplace requires that each API product has an external developer portal to handle subscription requests, provide API information and details, and allow subscribers to manage their usage.

This customer need for an external developer portal for the marketplace gave rise to the new open source API Gateway developer portal, a serverless web application implementation.  The goal of the API Gateway developer portal project is to let customers follow a few easy steps to create a serverless web application that lists a catalog of their APIs built with API Gateway while allowing for developer signups.

The API Gateway developer portal was built upon AWS Serverless Express, an open source library published by AWS that helps you use AWS Lambda and Amazon API Gateway to build web applications and services with the Node.js Express framework.  Additionally, the API Gateway developer portal application uses an AWS SAM (Serverless Application Model) template to deploy its serverless resources.  AWS SAM is a simplified CloudFormation template format and specification that makes it easier to manage and deploy serverless applications on AWS.

To build your developer portal using the API Gateway portal, you would start by cloning the aws-api-gateway-developer-portal project from GitHub.

Assuming you have the latest version of the AWS CLI and Node.js installed, Mac and Linux users set up the developer portal by running “npm run setup” on the command line; Windows users run “npm run win-setup” instead.

The result is a functional sample developer portal website running on S3 that you can customize in order to create your own developer portal for your APIs.

The frontend of the sample developer portal website is built with the React JavaScript library, and the backend is an AWS Lambda function running the aws-serverless-express library. Additionally, a Lambda function with an SNS event source was created as a listener, so that you are notified when customers subscribe to or unsubscribe from your API via the AWS Marketplace console.  You can learn more about the steps to build, customize, and deploy your API Gateway developer portal web application with this reference project by visiting the AWS Compute Blog post, which discusses the architecture and implementation in more detail.

 

The next key step in monetizing our API is establishing an account on the AWS Marketplace.  If an account is not already established, registering simply involves verifying that you meet the prerequisites listed in the AWS Marketplace Seller Guide and completing a seller registration form on the AWS Marketplace Management Portal.  You can see a snapshot of the start of the seller registration form below.

To list the API, you would fill out a product load form describing the API, establish the pricing for the API, and provide the IDs of the AWS accounts that will test the API subscription process.  Completing this form also requires you to submit the URL of your API developer portal.

When your seller registration is complete, you will be supplied an AWS Marketplace product code.  You will need to associate your marketplace product code with your API usage plan.  In order to complete this step, you would simply log into the API Gateway console and go to your API usage plan. Go to the Marketplace tab and enter your product code. This tells API Gateway to send measurement data to AWS Marketplace when your API is used.
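
The same association can also be made programmatically by patching the usage plan’s product code. Here is a hedged sketch; the usage plan ID and product code are placeholders:

var AWS = require('aws-sdk');
var apigateway = new AWS.APIGateway({ region: 'us-east-1' });

// Attach the AWS Marketplace product code to the usage plan so that
// API Gateway reports usage for metering and billing
apigateway.updateUsagePlan({
  usagePlanId: 'a1b2c3',   // placeholder usage plan ID
  patchOperations: [
    { op: 'replace', path: '/productCode', value: 'YOUR_MARKETPLACE_PRODUCT_CODE' }
  ]
}, function (err, data) {
  if (err) console.log(err, err.stack);
  else console.log('Product code associated with plan', data.id);
});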

With your Amazon API Gateway managed API packaged into a usage plan, the accompanying API developer portal created, seller account registration completed, and the product code associated with the usage plan, we are now ready to monetize our API on the AWS Marketplace.

Learn more about monetizing your APIs created with API Gateway by checking out the related blog post and reviewing the API Gateway developer guide documentation.

Summary

As you can see, the AWS teams were busy in 2016 working to make it easier for customers to create and deploy serverless architectures, as well as providing mechanisms for customers to monetize their API Gateway managed APIs.

Visit the product documentation for AWS Lambda and Amazon API Gateway to learn more about these services and all the newly released features.

Tara

AWS re:Start – Training and Job Placement in the UK

by Jeff Barr | in Announcements, Training and Certification

As a follow-on to the recent launch of the AWS Region in London, I am happy to be able to tell you about a new UK-centric training and job placement program that we call AWS re:Start. This program is designed to educate young adults, military veterans, members of the military reserve, those leaving the Armed Forces, and service spouses on the latest software development and cloud computing technologies.

We’re working closely with QA Consulting (an APN Training Partner), The Prince’s Trust, and the Ministry of Defence (MoD). In conjunction with members of AWS Partner Network (APN) and customers, work placements will be offered to 1,000 people as part of this program.

AWS re:Start is designed to accommodate participants at all levels of experience – even those with no previous technical knowledge can sign up. Participants who join AWS re:Start will complete technical training classes led by AWS certified instructors and will gain experience through on-the-job training. They will also learn about multi-tier architectures, application programming interfaces (APIs), and microservices, giving them the knowledge and skills needed to help businesses build secure, elastically scalable, and highly reliable applications in the cloud. Training content for the AWS re:Start program will be curated by AWS in collaboration with QA Consulting, who will also deliver the training courses.

Organizations that have pledged job placements to AWS re:Start include Annalect, ARM, Claranet, Cloudreach, Direct Line Group, EDF Energy, Funding Circle, KCOM, Sage, Tesco Bank, and Zopa. Participants completing the program can expect to be eligible for many different technical positions within these companies, including highly sought-after entry-level positions such as first-line help desk support, IT support analyst, software developer, IT support technician, network engineer, IT recruitment consultant, and IT sales roles. They will also have the fundamental knowledge needed to immediately start working with AWS and building their own technology start-up business. To learn more about this aspect of the program, read AWS re:Start for Employers.

AWS re:Start for the Military
AWS re:Start training and work placements for the Armed Forces, including reservists, veterans, service leavers, and service spouses, will be delivered through the Ministry of Defence and the Career Transition Partnership (CTP).  AWS is also proud to be signing the Armed Forces Covenant, which establishes how businesses support members of the UK Armed Forces community and guards against the discrimination that returning service men and women may face when entering the civilian workforce.

AWS re:Start for Young Adults
The AWS re:Start program will be delivered to young adults through The Prince’s Trust Get into Technology program. The Prince’s Trust is a youth charity that helps young people aged 13 to 30 find jobs, education, and training to help them succeed. In addition to technical training, the ‘Get into Technology’ program will support students with mentoring, soft work skills, and help in applying for jobs including resume writing and interview skills.

Learn More / Apply Now
The first intake of participants for AWS re:Start is scheduled for March 27, 2017. Those who complete the AWS re:Start program will be eligible to apply for further training courses offered by QA Consulting to prepare them to take the AWS Associate Level Certification exam and other certifications. Visit the AWS re:Start site to learn more.

Jeff;

Reduce DDoS Risks Using Amazon Route 53 and AWS Shield

by Jeff Barr | in AWS Shield, Route 53

In late October of 2016, a large-scale cyber attack consisting of multiple denial of service attacks targeted a well-known DNS provider. The attack, consisting of a flood of DNS lookups from tens of millions of IP addresses, made many Internet sites and services unavailable to users in North America and Europe. This Distributed Denial of Service (DDoS) attack was believed to have been executed using a botnet consisting of a multitude of Internet-connected devices such as printers, cameras, residential network gateways, and even baby monitors. These devices had been infected with the Mirai malware and generated several hundred gigabytes of traffic per second. Many corporate and educational networks simply do not have the capacity to absorb a volumetric attack of this size.

In the wake of this attack and others that have preceded it, our customers have been asking us for recommendations and best practices that will allow them to build systems that are more resilient to various types of DDoS attacks. The short-form answer involves a combination of scale, fault tolerance, and mitigation (the AWS Best Practices for DDoS Resiliency white paper goes into far more detail) and makes use of Amazon Route 53 and AWS Shield (read AWS Shield – Protect Your Applications from DDoS Attacks to learn more).

Scale – Route 53 is hosted at numerous AWS edge locations, creating a global surface area capable of absorbing large amounts of DNS traffic. Other edge-based services, including Amazon CloudFront and AWS WAF, also have a global surface area and are also able to handle large amounts of traffic.

Fault Tolerance – Each edge location has many connections to the Internet. This allows for diverse paths and helps to isolate and contain faults. Route 53 also uses shuffle sharding and anycast striping to increase availability. With shuffle sharding, each name server in your delegation set corresponds to a unique set of edge locations. This arrangement increases fault tolerance and minimizes overlap between AWS customers. If one name server in the delegation set is not available, the client system or application will simply retry and receive a response from a name server at a different edge location. Anycast striping is used to direct DNS requests to an optimal location. This has the effect of spreading load and reducing DNS latency.
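
You can see a delegation set for yourself by listing the authoritative name servers for one of your Route 53 hosted zones; a zone is typically served by four name servers spread across separate top-level domains. A small sketch using Node.js (example.com below is a placeholder for your own domain):

var dns = require('dns');

// List the authoritative name servers for a Route 53 hosted zone.
// A typical delegation set spans four distinct awsdns name servers
// under .com, .net, .org, and .co.uk.
dns.resolveNs('example.com', function (err, nameServers) {
  if (err) return console.log(err);
  nameServers.forEach(function (ns) {
    console.log(ns);
  });
});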

Mitigation – AWS Shield Standard protects you from 96% of today’s most common attacks. This includes SYN/ACK floods, Reflection attacks, and HTTP slow reads. As I noted in my post above, this protection is applied automatically and transparently to your Elastic Load Balancers, CloudFront distributions, and Route 53 resources at no extra cost. Protection (including deterministic packet filtering and priority based traffic shaping) is deployed to all AWS edge locations and inspects all traffic with just microseconds of overhead, all in a totally transparent fashion. AWS Shield Advanced includes additional DDoS mitigation capability, 24×7 access to our DDoS Response Team, real time metrics and reports, and DDoS cost protection.

To learn more, read the DDoS Resiliency white paper and learn about Route 53 anycast.

Jeff;

 

New – AWS OpsWorks for Chef Automate

by Jeff Barr | in AWS OpsWorks, AWS re:Invent

AWS OpsWorks helps you to configure and run applications using Chef. You use a Domain Specific Language (DSL) to write cookbooks that define your application’s architecture and the configuration of each component. The Chef server is an essential part of the configuration process. It stores all of the cookbooks and tracks state information for each of the instances (nodes in Chef terminology).

Because the Chef server is in the critical path when newly launched instances are configured, it must be reliable. Many OpsWorks and Chef users install and maintain this important architectural component themselves. In production-scale environments, this leaves them to handle backups, restores, version upgrades, and so forth.

New AWS OpsWorks for Chef Automate
Early this month we launched AWS OpsWorks for Chef Automate from the AWS re:Invent stage. You can launch the Chef Automate server with just 3 clicks and start using it within minutes. You can use community cookbooks from Chef Supermarket and community tools such as Test Kitchen and Knife.

You can use Chef Automate to manage your infrastructure throughout your application’s life-cycle. For example, newly launched EC2 instances can automatically connect to the Chef server and run a specified recipe by using an unattended association script (read Adding Nodes Automatically in AWS OpsWorks for Chef Automate to learn more). The registration script can be used to register EC2 instances created dynamically through an Auto Scaling group, as well as on-premises servers.

Take a Look
Let’s launch a Chef Automate server from the OpsWorks Console. Click on Go to OpsWorks for Chef Automate to get started.

Click on Create Chef Automate server, give your server a name, choose a region, and select a suitable EC2 instance type:

Choose one of your SSH key pairs, or opt out of SSH:

Finally, configure your network (VPC), IAM, maintenance window, and backup settings:

Click on Next, review your settings, and then click on Launch! The launch process takes less than 20 minutes. During that time you can download the sign-in credentials for your Chef Automate dashboard along with a Starter Kit:

You can see all of your Chef Automate servers at a glance:

Click on the server name (BorkBorkBork here), and then on Open Chef Automate dashboard, then enter your credentials to log in:

And here’s the dashboard:

You can see and manage your nodes:

Manage your workflows:

And much more!

Behind the scenes, the launch process invokes an AWS CloudFormation template. The template creates an EC2 instance, an Elastic IP address, and a security group.

Available Now
You can launch AWS OpsWorks for Chef Automate today in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) Regions. Pricing is based on the number of nodes and the number of hours that they are connected to the server; see the Chef Automate Pricing page for more info. As part of the AWS Free Tier, you can use up to 10 nodes at no charge for 12 months.

Jeff;

New – Daily Aggregated Price List notifications

by Tara Walker

Last year, AWS customers and partners were asking for a systematic way to retrieve prices for AWS services.  We answered the need by launching the AWS Price List API last December with pricing for thirteen (13) AWS services. The API provides pricing data in JSON and CSV format for download and enables customers to query for the prices of AWS services.  The Price List API also allows customers to receive price change updates via Amazon SNS notifications.

Expanding the AWS Price List API

Over the last three months we have expanded the AWS Price List API to provide pricing data for all AWS services.  Customers doing cost analysis, whether building cloud-based solutions or moving on-premises workloads to the cloud, now have easier access to a comprehensive price list of AWS services. This gives customers and partners greater control over the budgeting, forecasting, and planning of their cloud solutions.

In addition to the expansion of the AWS Price List API, customers can sign up to receive notifications about price cuts, new services, and new instance types. Upon subscribing, you can choose to receive updates once a day or every time a price update occurs. If you choose to be notified once a day, the SNS notification will include all price changes applied during that day. Here’s a sample of an email notification that you would receive after subscribing to the Price List API:

Let’s do a quick review of how to use the Price List API. First, to access pricing information via the Price List API, you download two types of files: the Offer index file and the Offer file. The Offer index file is available as a JSON file and lists the supported AWS services, along with the URL of the Offer file for each service. The Offer file lists the products and prices for a single AWS service and is available in either CSV or JSON format. Both can be retrieved via simple download links, providing an easy way to obtain the pricing data files.
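
For example, here is a minimal Node.js sketch that downloads the Offer index file and prints the Offer file URL for each service. The index URL and response fields shown reflect the publicly documented format at the time of writing; adjust them if the endpoint changes:

var https = require('https');

// Download the Offer index file, which lists every supported service
// and the URL of its Offer file (JSON or CSV).
var indexUrl = 'https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/index.json';

https.get(indexUrl, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var index = JSON.parse(body);
    Object.keys(index.offers).forEach(function (service) {
      console.log(service + ': ' + index.offers[service].currentVersionUrl);
    });
  });
}).on('error', function (err) {
  console.log(err);
});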

You can get started today by using the Price List API to obtain pricing data for any of the AWS services. With the expansion of the AWS Price List API to include all of the AWS services, subscribing to the Price List API notifications is a great way to keep up to date on the latest AWS price cuts and get introduced to new services.

To learn more about the AWS Price List API, and for more information on how to subscribe to API notifications, check out the previous AWS Blog post introducing the service and the Using the AWS Price List API documentation.

– Tara

Look Before You Leap – December 31, 2016 Leap Second on AWS

by Jeff Barr | on | in Amazon EC2, Amazon RDS, Announcements | | Comments

If you are counting down the seconds before 2016 is history, be sure to add one at the very end!

The next leap second (the 27th so far) will be inserted on December 31, 2016 at 23:59:60 UTC. This will keep Earth time (Coordinated Universal Time) close to mean solar time and means that the last minute of the year will have 61 seconds.

The information in our last post (Look Before You Leap – The Coming Leap Second and AWS) still applies, with a few nuances and new developments:

AWS Adjusted Time – We will spread the extra second over the 24 hours surrounding the leap second (11:59:59 on December 31, 2016 to 12:00:00 on January 1, 2017). AWS Adjusted Time and Coordinated Universal Time will be in sync at the end of this time period.

Microsoft Windows – Instances that are running Microsoft Windows AMIs supplied by Amazon will follow AWS Adjusted Time.

Amazon RDS – The majority of Amazon RDS database instances will show “23:59:59” twice. Oracle versions 11.2.0.2, 11.2.0.3, and 12.1.0.1 will follow AWS Adjusted Time. For Oracle versions 11.2.0.4 and 12.1.0.2, contact AWS Support for more information.

Need Help?
If you have any questions about this upcoming event, please contact AWS Support or post in the EC2 Forum.

Jeff;

 

EC2 Systems Manager – Configure & Manage EC2 and On-Premises Systems

by Jeff Barr | on | in Amazon EC2, EC2 Systems Manager, Windows | | Comments

Last year I introduced you to the EC2 Run Command and showed you how to use it to do remote instance management at scale, first for EC2 instances and then in hybrid and cross-cloud environments. Along the way we added support for Linux instances, making EC2 Run Command a widely applicable and incredibly useful administration tool.

Welcome to the Family
Werner announced the EC2 Systems Manager at AWS re:Invent and I’m finally getting around to telling you about it!

This is a new management service that includes an enhanced version of EC2 Run Command along with eight other equally useful functions. Like EC2 Run Command, it supports hybrid and cross-cloud environments composed of instances and services running Windows and Linux. You simply open the AWS Management Console, select the instances that you want to manage, and define the tasks that you want to perform (API and CLI access is also available).

Here’s an overview of the improvements and new features:

Run Command – Now allows you to control the velocity of command executions, and to stop issuing commands if the error rate grows too high.

State Manager – Maintains a defined system configuration via policies that are applied at regular intervals.

Parameter Store – Provides centralized (and optionally encrypted) storage for license keys, passwords, user lists, and other values.

Maintenance Window – Specify a time window for installation of updates and other system maintenance.

Software Inventory – Gathers a detailed software and configuration inventory (with user-defined additions) from each instance.

AWS Config Integration – In conjunction with the new software inventory feature, AWS Config can record software inventory changes to your instances.

Patch Management – Simplify and automate the patching process for your instances.

Automation – Simplify AMI building and other recurring AMI-related tasks.

Let’s take a look at each one…

Run Command Improvements
You can now control the number of concurrent command executions. This can be useful in situations where the command references a shared, limited resource such as an internal update or patch server and you want to avoid overloading it with too many requests.

This feature is currently accessible from the CLI and from the API. Here’s a CLI example that limits the number of concurrent executions to 2:

$ aws ssm send-command \
  --instance-ids "i-023c301591e6651ea" "i-03cf0fc05ec82a30b" "i-09e4ed09e540caca0" "i-0f6d1fe27dc064099" \
  --document-name "AWS-RunShellScript" \
  --comment "Run a shell script or specify the commands to run." \
  --parameters commands="date" \
  --timeout-seconds 600 --output-s3-bucket-name "jbarr-data" \
  --region us-east-1 --max-concurrency 2

Here’s a more interesting variant that is driven by tags and tag values by specifying --targets instead of --instance-ids:

$ aws ssm send-command \
  --targets "Key=tag:Mode,Values=Production" ... 

You can also stop issuing commands if they are returning errors, with the option to specify either a maximum number of errors or a failure rate:

$ aws ssm send-command --max-errors 5 ... 
$ aws ssm send-command --max-errors 5% ...

State Manager
State Manager helps to keep your instances in a desired state, as defined by a document. You create the document, associate it with a set of target instances, and then create an association to specify when and how often the document should be applied. Here’s a document that updates the message of the day file:

And here’s the association (this one uses tags so that it applies to current instances and to others that are launched later and are tagged in the same way):

Specifying targets using tags makes the association future-proof, and allows it to work as expected in dynamic, auto-scaled environments. I can see all of my associations, and I can run the new one by selecting it and clicking on Apply Association Now:
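The same association can be created from the CLI. Here's a minimal sketch; the document name UpdateMOTD is a hypothetical stand-in for the message-of-the-day document shown above:

# Associate a custom document with all instances tagged Mode=Production,
# re-applying it every Sunday at 02:00 UTC
$ aws ssm create-association \
  --name "UpdateMOTD" \
  --targets "Key=tag:Mode,Values=Production" \
  --schedule-expression "cron(0 2 ? * SUN *)"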

Parameter Store
This feature simplifies storage and management of license keys, passwords, and other data that you want to distribute to your instances. Each parameter has a type (string, string list, or secure string), and can be stored in encrypted form. Here’s how I create a parameter:

And here’s how I reference the parameter in a command:
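From the CLI, the equivalent steps look something like this; the parameter name and value are examples only:

# Store a license key as an encrypted (SecureString) parameter
$ aws ssm put-parameter \
  --name "prod-license-key" \
  --type "SecureString" \
  --value "ABCD-1234-EFGH-5678"

# Reference the parameter in a Run Command invocation using the {{ssm:...}} syntax
$ aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=tag:Mode,Values=Production" \
  --parameters commands='echo {{ssm:prod-license-key}}'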

Maintenance Window
This feature allows specification of a time window for installation of updates and other system maintenance. Here’s how I create a weekly time window that opens for four hours every Saturday:

After I create the window, I need to assign a set of instances to it. I can do this by instance ID or by tag:

And then I need to register a task to perform during the maintenance window. For example, I can run a Linux shell script:
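For reference, here's a rough CLI sketch of the same setup; the window name is an example and the window ID shown in the second command is a placeholder for the value returned by the create call:

# Create a four-hour window that opens every Saturday at 04:00 UTC
$ aws ssm create-maintenance-window \
  --name "SaturdayMaintenance" \
  --schedule "cron(0 4 ? * SAT *)" \
  --duration 4 \
  --cutoff 1 \
  --allow-unassociated-targets

# Register the instances in a patch group as targets of the window
$ aws ssm register-target-with-maintenance-window \
  --window-id "mw-0123456789abcdef0" \
  --resource-type "INSTANCE" \
  --targets "Key=tag:Patch Group,Values=Win2016"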

Software Inventory
This feature collects information about software and settings for a set of instances. To access it, I click on Managed Instances and Setup Inventory:

Setting up the inventory creates an association between an AWS-owned document and a set of instances. I simply choose the targets, set the schedule, and identify the types of items to be inventoried, then click on Setup Inventory:

After the inventory runs, I can select an instance and then click on the Inventory tab in order to inspect the results:

The results can be filtered for further analysis. For example, I can narrow down the list of AWS Components to show only development tools and libraries:

I can also run inventory-powered queries across all of the managed instances. Here’s how I can find Windows Server 2012 R2 instances that are running a version of .NET older than 4.6:
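The collected data can also be pulled from the CLI. Here's a minimal sketch that lists the applications inventoried on one of the instances from the earlier Run Command example:

# List the software inventory entries of type AWS:Application for a single instance
$ aws ssm list-inventory-entries \
  --instance-id "i-023c301591e6651ea" \
  --type-name "AWS:Application"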

AWS Config Integration
The results of the inventory can be routed to AWS Config, allowing you to track changes to applications, AWS components, instance information, network configuration, and Windows Updates over time. To access this information, I click on Managed instance information above the Config timeline for the instance:

The three lines at the bottom lead to the inventory information. Here’s the network configuration:

Patch Management
This feature helps you to keep the operating system on your Windows instances up to date. Patches are applied during maintenance windows that you define, and are done with respect to a baseline. The baseline specifies rules for automatic approval of patches based on classification and severity, along with an explicit list of patches to approve or reject.

Here’s my baseline:

Each baseline can apply to one or more patch groups. Instances within a patch group have a Patch Group tag. I named my group Win2016:

Then I associated the value with the baseline:

The next step is to arrange to apply the patches during a maintenance window using the AWS-ApplyPatchBaseline document:

I can return to the list of Managed Instances and use a pair of filters to find out which instances are in need of patches:
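From the CLI, putting an instance into the patch group and checking its patch state afterwards looks roughly like this; the instance ID is reused from the earlier example:

# Add an instance to the Win2016 patch group; the tag key must be exactly "Patch Group"
$ aws ec2 create-tags \
  --resources "i-023c301591e6651ea" \
  --tags Key="Patch Group",Value=Win2016

# After the maintenance window has run, check the instance's patch compliance
$ aws ssm describe-instance-patch-states \
  --instance-ids "i-023c301591e6651ea"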

Automation
Last but definitely not least, the Automation feature simplifies common AMI-building and updating tasks. For example, you can build a fresh Amazon Linux AMI each month using the AWS-UpdateLinuxAmi document:

Here’s what happens when this automation is run:
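Here's a rough CLI sketch of kicking off that automation. The parameter names below follow the AWS-UpdateLinuxAmi document's schema as I understand it (you can confirm them with describe-document), and the AMI ID and role names are placeholders:

# Inspect the document to see its required parameters
$ aws ssm describe-document --name "AWS-UpdateLinuxAmi"

# Start the automation (values below are placeholders)
$ aws ssm start-automation-execution \
  --document-name "AWS-UpdateLinuxAmi" \
  --parameters '{"SourceAmiId":["ami-0b33d91d"],"InstanceIamRole":["AutomationInstanceProfile"],"AutomationAssumeRole":["arn:aws:iam::123456789012:role/AutomationServiceRole"]}'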

Available Now
All of the EC2 Systems Manager features and functions that I described above are available now and you can start using them today at no charge. You pay only for the resources that you manage.

Jeff;