Azure vs. AWS Text to Blob with SDKs

This post demonstrates what is involved in writing and reading some text to and from an Azure and an AWS blob.

Use case

What I set out to achieve was to demonstrate how to read and write some text to a blob with the SDKs. Just to make it a little more interesting, I decided to use .NET for the reading and Java for the writing.

Obtaining the SDKs

Adding the SDKs was a seamless process: NuGet was used for .NET and Maven for Java.

[screenshots: adding the packages via NuGet (.NET) and Maven (Java)]
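
For reference, the packages involved would be `WindowsAzure.Storage` and `AWSSDK.S3` on NuGet, and `com.microsoft.azure:azure-storage` and `com.amazonaws:aws-java-sdk-s3` on Maven.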

 

Write

Azure

[screenshot: the Java write code for Azure]
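
In outline, the write looks like this. A minimal sketch with the classic WindowsAzure.Storage package; the connection string, container and blob names are placeholders, and while my actual write was in Java, the SDK calls map almost one-to-one to the C# shown here:

```csharp
// Minimal sketch: write a string straight to a block blob with the classic
// WindowsAzure.Storage SDK. Connection string, container and blob names are
// placeholders.
using Microsoft.WindowsAzure.Storage;

class AzureBlobWrite
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");

        var container = account.CreateCloudBlobClient().GetContainerReference("demo");
        container.CreateIfNotExists();

        // UploadText takes the string directly; no file handling needed.
        container.GetBlockBlobReference("hello.txt").UploadText("Hello, blob!");
    }
}
```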

AWS

[screenshot: the Java write code for AWS]
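
The S3 equivalent, again as an illustrative C# sketch with placeholder names. My actual write was Java, where putObject wants a File or an InputStream plus metadata rather than a plain string; that is the "dealing with files" point in the conclusions below, and the stream-based shape here mirrors it:

```csharp
// Minimal sketch: write a string to S3 with the AWS SDK for .NET.
// Bucket and key are placeholders; credentials come from the default profile.
using System.IO;
using System.Text;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class S3Write
{
    static void Main()
    {
        // The region must always be specified (see the conclusions below).
        using (var s3 = new AmazonS3Client(RegionEndpoint.EUWest1))
        using (var stream = new MemoryStream(Encoding.UTF8.GetBytes("Hello, S3!")))
        {
            s3.PutObject(new PutObjectRequest
            {
                BucketName = "my-demo-bucket",
                Key = "hello.txt",
                InputStream = stream
            });
        }
    }
}
```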

Read

Azure

[screenshot: the .NET read code for Azure]
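
Reading it back in .NET is a one-liner once you have the blob reference; a minimal sketch with the same placeholder names:

```csharp
// Minimal sketch: read the text back from the blob.
using System;
using Microsoft.WindowsAzure.Storage;

class AzureBlobRead
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");

        string text = account.CreateCloudBlobClient()
                             .GetContainerReference("demo")
                             .GetBlockBlobReference("hello.txt")
                             .DownloadText();   // returns the blob content as a string

        Console.WriteLine(text);
    }
}
```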

AWS

[screenshot: the .NET read code for AWS]
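
And the S3 read, again a sketch with placeholder names; note the region surfacing yet again:

```csharp
// Minimal sketch: read the object back from S3 as a string.
using System;
using System.IO;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class S3Read
{
    static void Main()
    {
        using (var s3 = new AmazonS3Client(RegionEndpoint.EUWest1))   // region again...
        using (var response = s3.GetObject(new GetObjectRequest
        {
            BucketName = "my-demo-bucket",
            Key = "hello.txt"
        }))
        using (var reader = new StreamReader(response.ResponseStream))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}
```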

Conclusions

Both SDKs were trivial to install and use. The Azure SDKs suited my use case a little better, in that they didn't need me to deal with files in my application code (I expect text is not a mainstream use case).

AWS, as always, relies on the region being specified, which I can't say I like that much.

Media Indexing In the Cloud

So out of the blue I found myself giving Azure Media Indexing a trial run, for no other reason than that I could. This is why I love cloud tech so much: it brings something that would have been very difficult 5-10 years ago within reach of anyone with a public cloud account.

AWS vs Azure

Both AWS and Azure have media services, typically used to manage digital media and serve it up to consumer playback devices at scale.

AWS has Elastic Transcoder and Azure has Azure Media Services; however, only Azure has the ability to dig into audio or video files and extract the text within.

Azure Media Indexer

Azure Media Indexer enables you to make the content of your media files searchable and to generate a full-text transcript for closed captioning and keywords. You can process one media file or multiple media files in a batch. Have a look at this post for some details on how to do it from code: https://azure.microsoft.com/en-us/documentation/articles/media-services-index-content/

The code uploads a file, then starts an indexing job, then downloads the results:
Note: the source code in that article has a typo; I've submitted a pull request, so hopefully this will be fixed, but it's easy to spot.
Also, the download part failed with an exception for me, so I just pulled the results down with a little bit of code on a second pass.

[screenshot: the indexing code]
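
In outline, the flow looks something like this; a condensed sketch assuming the classic Media Services .NET SDK (`windowsazure.mediaservices` package), with account credentials, paths and the task configuration as placeholders. Step 3 is roughly the little bit of code I used for the download:

```csharp
// Condensed sketch of the upload -> index -> download flow with the classic
// Media Services .NET SDK. Credentials, file paths and the configuration
// string are placeholders.
using System;
using System.IO;
using System.Linq;
using System.Threading;
using Microsoft.WindowsAzure.MediaServices.Client;

class IndexPodcast
{
    static void Main()
    {
        var context = new CloudMediaContext(
            new MediaServicesCredentials("account-name", "account-key"));

        // 1) Upload the media file as an asset.
        IAsset asset = context.Assets.Create("podcast", AssetCreationOptions.None);
        IAssetFile file = asset.AssetFiles.Create("dotnetrocks_1276_news_from_build.mp3");
        file.Upload(@"C:\temp\dotnetrocks_1276_news_from_build.mp3");

        // 2) Start the indexing job and wait for it to finish.
        IMediaProcessor indexer = context.MediaProcessors
            .Where(p => p.Name == "Azure Media Indexer")
            .ToList().OrderBy(p => new Version(p.Version)).Last();
        IJob job = context.Jobs.Create("Index podcast");
        ITask task = job.Tasks.AddNew("Indexing task", indexer,
            File.ReadAllText("indexerConfig.xml"), TaskOptions.None);
        task.InputAssets.Add(asset);
        task.OutputAssets.AddNew("Indexed output", AssetCreationOptions.None);
        job.Submit();
        job.GetExecutionProgressTask(CancellationToken.None).Wait();

        // 3) Download the results (the second-pass bit that saved me when the
        //    article's download step threw an exception).
        foreach (IAssetFile output in job.OutputMediaAssets[0].AssetFiles)
            output.Download(Path.Combine(@"C:\temp\output", output.Name));
    }
}
```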

If you'd rather upload the content and start the indexing job manually with the old portal, then the download part of the above code is possibly all you need.

Here’s how:

On your media account, upload some content.

[screenshots: uploading content]

Once the content is uploaded, start the indexer process. Set a good title, as Azure will reach out to the interweb and use it to seed the language extraction.

[screenshots: starting the indexing job]

There is no way to download the output from the portal, so use the code I shared above to download the content.

I processed the latest podcast at the time of publishing from https://www.dotnetrocks.com/
https://s3.amazonaws.com/dnr/dotnetrocks_1276_news_from_build.mp3 

In hindsight it was possibly not the best podcast to index, as it was recorded live @Build (I expect; I'm two episodes behind on DNR this week, so I have not listened to it yet). The DNR guys typically have exceedingly good audio, so at some stage it might be worth indexing another episode.

Results

You can find the results here. Initially my knee-jerk reaction was, “ah, this is poor”, but after reflecting on it I'm blown away by what was done, and so, sooo easily!

With a bit of editing, this can be thrown into Azure Search / SQL Server etc. for full-text search and direct-seek media playback.

See for yourself:

[screenshot: the generated transcript]

For sure it needs some editing, e.g.
“I release the eleventh music decode by”
should in fact be
“I released the eleventh music to code by”

but what a great start!!!

Cloud costs: Shut those VMs down

The public cloud is fantastic for numerous reasons. If you're not faced with some restriction, such as where your data must live, then my advice is to get away from private clouds and over to the public clouds as fast as your legs can carry you!

However, once you're there it's not all plain sailing; if you let a team of people loose to play with all these new toys on the back of your company's credit card, then costs can start to accumulate very quickly!

Sometimes your VMs are not being used for production, and what invariably happens is that these machines get forgotten about or are left running for no good reason. While there are a few ways to capture such scenarios, what I'll show you now is a very quick way of scheduling those known VMs to shut down (or start up) on a predefined schedule.

AWS

For AWS, the easiest way of scheduling a single standalone VM to shut down is to use the AWS Data Pipeline service.

[screenshot: AWS Data Pipeline]

Let's quickly walk through the workflow:

1) Create new Pipeline with CLI Command

[screenshot: creating the pipeline]

2) Enter the Stop EC2 CLI commands

[screenshot: the Stop EC2 CLI commands]

Note: this field only shows as one line of text in Chrome, so I modified the styles to show the full command.


You can see that I have two different stop commands. I could combine these into one command with the two instance IDs; however, if one fails then they both fail, which can be problematic if, for example, an instance gets terminated.
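
For reference, each of the stop commands is just the standard CLI call, something like `aws ec2 stop-instances --instance-ids i-0123456789abcdef0` (placeholder instance ID; the security note in step 5 links to the full setup).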

3) Schedule

[screenshot: the schedule]

4) Set log file bucket

[screenshot: the log file bucket]

5) Select role

[screenshot: role selection]

Choose custom and then select the two defaults.
Security Note: roles need to be configured to allow Data Pipeline access to your VMs; please see here: https://aws.amazon.com/premiumsupport/knowledge-center/stop-start-ec2-instances/

6) Done

[screenshot: the completed pipeline]

That's it: you now have a scheduled task that will switch off your VMs nightly. It should be noted that each run starts a micro EC2 instance for the Data Pipeline, with a default runtime of 50 minutes, so you need to ensure the end justifies the means; better yet, reduce the runtime by editing the workflow down to, e.g., 15 minutes.

[screenshot: editing the workflow runtime]

Azure

In order to achieve the same result with Azure, we are going to use Azure Automation.

[screenshot: Azure Automation]
If you're familiar with Azure, you will know that there are currently two ways of creating VMs: the classic approach and the RM (Resource Manager) approach. In this post I'll show you the RM approach, but feel free to substitute classic in its place; the steps are nearly identical.

1) Open or create an Azure Automation account.

[screenshot: the Automation account]

2) Edit Assets

[screenshots: the Assets blade]

Add a variable for the AzureSubscriptionId you'll be using.
Select your service principal account; you'll have to search for it to appear.

3) Runbook

We have two options now: we can either use some PowerShell or a graphically defined workflow. Let's do this with the graphical version; we don't need to create it ourselves, we simply import it from the gallery.

[screenshots: importing the runbook from the gallery]

After importing, choose Edit on the runbook.

[screenshots: editing the runbook]

4) Set inputs

[screenshot: the input parameters]

Then we set the two assets we provided earlier and, optionally, a ResourceGroupName (to stop all VMs in a resource group) or a VMName. The “Auto” you see above isn't a keyword; it's my badly named resource group.

5) Publish

[screenshot: publishing the runbook]

 

6) Set schedule

Go back to the runbook and choose Schedule.

[screenshot: setting the schedule]

With the schedule you can specify any of the input parameters and override the defaults if you so wish.

Security Note: much the same as with AWS, you'll need to ensure you have permission to access the VMs from Azure Automation; the best option is to create a service principal application. See: https://azure.microsoft.com/en-us/documentation/articles/resource-group-create-service-principal-portal/

 

Conclusion

While the Azure approach does look much more convoluted, it is much more powerful. For example, it is very easy to extend the Azure runbook to check all VMs for a “Production” tag and only shut down VMs that are not production (because shutting production down would be bad, right!). With AWS, we are simply relying on a feature of Data Pipeline that allows us to run simple CLI commands.

Pricing is much of a muchness between the two; with Azure you can run for free (up to a limit).

[screenshot: Azure Automation pricing]

With AWS, the 15 minutes on a micro instance is not even worth worrying about.

Web App deployment to AWS and Azure

As promised, here is the first instalment of the AWS vs. Azure blog post saga; again, I'm trying to remain impartial throughout.

What I intend to outline at this stage is how to get started deploying a new application to AWS and to Azure from within Visual Studio. I'm sure there are those of you shouting, “.NET, Visual Studio, Azure? Of course Azure will do it better!!!” However, rest assured this is only the first of a few posts related to Azure App Service and AWS Elastic Beanstalk, and AWS doesn't fare all that badly.

Sample Application

The sample application in this case is just a File/New ASP.NET MVC 5 project using .NET 4.6.1. I'm only hitting the home page as a test and not worrying about databases for now (databases will make another interesting series of blog posts!).

AWS Elastic Beanstalk

AWS has an AWS Toolkit plugin for Visual Studio; this allows you to view and manipulate AWS resources.

[screenshot: AWS Explorer in Visual Studio]

It also lets you publish applications to AWS by right-clicking on the solution and choosing “Publish to AWS”.

[screenshot: the Publish to AWS menu option]

Once you choose this option you’ll be presented with a dialog that lets you choose your environment or create a new one.
[screenshot: the publish wizard]

If you don't already have one, let's create one; you will choose a name for the environment.

[screenshot: naming the environment]

Next you choose your instance size (the underlying VM size, or any custom Amazon Machine Image you've created previously). Another option of interest is “use non-default VPC”; this is basically the network you'll be running on. All AWS accounts get a default VPC per region (and if you delete it, you'll need to contact AWS to get it back!). The single-instance environment option is selected here, as this is just a test; if I wasn't running in single-instance mode, I would be able to enable rolling deployments to keep my app running while it gets updated (more about that here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rollingupdates.html).

[screenshot: instance size and VPC options]

Lastly, we choose the application settings; I'm just deploying a .NET 4 runtime debug application.

[screenshot: application settings]

Once you review and finish, you can see your application start deploying in the portal.

[screenshot: the deployment in progress]

Once it's finished, which can take a few minutes after the upload, you should see the Health go green and you can access your application.

[screenshots: health green and the running application]

Note: if you're following along and wish to stop this Elastic Beanstalk environment to minimize costs/free-tier bandwidth, then please ensure you terminate it from the Elastic Beanstalk section of the console. Stopping the underlying EC2 instance will only serve to signal the Auto Scaling group it belongs to to start a new instance and restore the health of the application.

Azure App Service

Now let's deploy this same application to Azure. Right-click the solution in Solution Explorer and choose Publish.

[screenshot: the Publish menu option]

Choose to publish to Azure.

[screenshot: choosing the publish target]

Like AWS, where we chose a server environment, we need to choose an app hosting plan. With Azure you can sign up for a free trial; if you have a subscription, you can choose to deploy a free web app (you get 10 free per region; there are some limitations, which we are not concerned with just now).

[screenshot: creating the hosting plan]

After creating this new hosting plan, we arrive back at the publish dialog.

[screenshots: the publish dialog]

Visual Studio then starts the publish task and opens the application in the web browser you've set as the default in Visual Studio.

[screenshot: the application in the browser]

You can also see your new application springing to life in the Azure portal: http://portal.azure.com

[screenshot: the web app in the Azure portal]

Summary

So, in this blog post I've run through how to deploy applications to PaaS offerings on AWS and Azure. In the next post I'm going to drill down and do some more comparing and contrasting of these two offerings. Stay tuned!

AWS or Azure

History

So, way back circa 2008, I registered for the AWS free tier. Back then I was working in a different industry that didn't have much need for 'the cloud'; I played with a few Linux VMs during that year, but nothing came of it and my trial expired.

Fast forward two years and Azure was born, at least in public. I was immediately sold and was all-in. I've used, abused, and consulted on more Azure projects than I can remember, and any time the subject of AWS came up I dismissed it as an inferior pioneer of cloud tech; I mean, just look at the console, it's offensive to the eye, is it not?

Fast forward another two years and I found myself, while heavily swallowing the PaaS Kool-Aid, recommending AWS over Azure to a client. Why? Simply because AWS has a managed offering for Oracle, and that particular client did not have the knowledge or appetite to manage their own Oracle server.

This did open my eyes to there being a bit more to AWS than an ugly console. An opportunity presented itself to become AWS certified and I jumped at it; now, as I write this article, I can put the lovely Solutions Architect Associate logo on my business card.


Learnings

So what have I learned about AWS in my quest for certification? Well, the console is not nearly as offensive as I once believed it to be; in fact, I think it's more practical than that sexy-looking new Azure portal, and it's faster to get things done in than constantly sliding those Azure portal blades around the place, that's for sure. As for feature parity, for the most part both platforms tend to support the same features in the general sense, but once you drill down, differences do start to emerge.

I've also decided it's about high time that I also get certified in Azure (underway); this should give me the street cred I need for what I'm going to try to achieve, and hopefully my findings will be as impartial as possible.

Cloud Wars

Starting from my next blog post, I'm going to start comparing features on both platforms and outline the pros and cons of each… Stay tuned for what should be a very interesting blog series. Obviously the topics are vast, so if anyone has any requests, please send me an email: b at briankeating.net.