This post demonstrates what is involved in writing and reading some text to an Azure blob and an AWS blob.
What I set out to achieve was to demonstrate how to read and write some text to a blob with the SDKs. Just to make it a little more interesting, I decided to use .NET for the reading and Java for the writing.
Obtaining the SDKs
Adding the SDKs was a seamless process: NuGet was used for .NET and Maven for Java.
Both SDKs were trivial to install and use. The Azure SDK suited my use case a little better in that it didn't require me to deal with files in my application code (I expect plain text is not a mainstream use case).
AWS, as always, relies on the region being specified, which I can't say I like that much.
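The SDK code itself was shown in screenshots, but as a quick sanity check the same text round trip can be sketched from the command line with the Azure and AWS CLIs; the storage account, container, and bucket names below are placeholders, not real resources:

```shell
echo "hello blob" > note.txt

# Azure: upload the text as a block blob, then pull it back down
az storage blob upload --account-name mystorageacct --container-name notes \
    --name note.txt --file note.txt
az storage blob download --account-name mystorageacct --container-name notes \
    --name note.txt --file note-copy.txt

# AWS: as mentioned above, the region must be specified (or pre-configured)
aws s3 cp note.txt s3://my-notes-bucket/note.txt --region eu-west-1
aws s3 cp s3://my-notes-bucket/note.txt note-copy.txt --region eu-west-1
```

Both commands assume you have already authenticated (az login / aws configure) and that the container and bucket exist.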
It’s been a busy year and my blog posts have suffered while I’ve been preparing and sitting these exams. But I’ve got them both now, so expect my Azure vs AWS series of posts to continue.
So, out of the blue, I found myself giving Azure Media Indexing a trial run, for no other reason than that I could. This is why I love cloud tech so much: it brings something that would have been very difficult 5-10 years ago within reach of anyone with a public cloud account.
AWS vs Azure
Both AWS and Azure have media services, typically used to manage digital media and serve it up to consumer playback devices at scale.
AWS has Elastic Transcoder and Azure has Azure Media Services; however, only Azure has the ability to dig into audio or video files and extract the text within.
Azure Media Indexer
Azure Media Indexer enables you to make the content of your media files searchable and to generate a full-text transcript for closed captioning and keywords. You can process one media file or multiple media files in a batch. Have a look at this post for details on how to do it from code: https://azure.microsoft.com/en-us/documentation/articles/media-services-index-content/
The code uploads a file then starts an indexing job, then downloads the results:
Note: the source code seen above has a typo. I've submitted a pull request, so hopefully this will be fixed, but it's easy to spot.
Also, the download part failed with an exception for me, so I just pulled the output down with a little bit of code on a second pass.
The above code is possibly all you need if you wish to upload content and start the indexing job manually with the old portal.
On your media account, upload some content.
Once the content is uploaded, start the indexer process. Set a good title, as Azure will reach out to the interweb and use it to seed the language extraction.
There is no way to download the output from the portal, so use the code I shared above to download the content.
I processed the latest podcast (at the time of publishing) from https://www.dotnetrocks.com/
In hindsight it was possibly not the best podcast to index, as it was recorded live @build (I expect; I'm two episodes behind on DNR this week so I haven't listened to it yet). The DNR guys typically have exceedingly good audio, so at some stage it might be worth indexing another episode.
You can find the results here. Initially my knee-jerk reaction was, "ah, this is poor", but after reflecting on it I'm blown away by what was done, and so, so easily!
With a bit of editing, this can be thrown into Azure Search / SQL Server etc. for full-text search and direct-seek media playback.
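As a sketch of that editing step, here's how the indexer's caption output (it can emit TTML, among other formats) could be turned into rows ready for a full-text index. The TTML snippet below is illustrative, not the actual indexer output for the episode:

```python
import xml.etree.ElementTree as ET

# TTML elements live in this namespace
TTML_NS = "{http://www.w3.org/ns/ttml}"

# Illustrative caption fragment in TTML form (not the real indexer output)
sample = """<?xml version="1.0" encoding="utf-8"?>
<tt xmlns="http://www.w3.org/ns/ttml">
  <body>
    <div>
      <p begin="00:00:01.000" end="00:00:04.000">I released the eleventh music to code by</p>
      <p begin="00:00:04.000" end="00:00:07.500">available on the website now</p>
    </div>
  </body>
</tt>"""

def caption_rows(ttml: str):
    """Yield (begin, end, text) tuples ready to insert into Azure Search / SQL."""
    root = ET.fromstring(ttml)
    for p in root.iter(TTML_NS + "p"):
        yield p.get("begin"), p.get("end"), "".join(p.itertext()).strip()

rows = list(caption_rows(sample))
```

Each row carries its timestamps, which is what makes the direct-seek playback mentioned above possible: a search hit maps straight back to an offset in the media file.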
See for yourself:
For sure it needs some editing, e.g.
I release the eleventh music decode by
should in fact be
I released the eleventh music to code by
but what a great start!!!
The public cloud is fantastic for numerous reasons. If you're not faced with some restriction, such as where your data lives, then my advice is to get away from private clouds and get to the public clouds as fast as your legs can carry you!
However, once you're there it's not all plain sailing. If you let a team of people loose to play with all these new toys, on the back of your company's credit card, then costs can start to accumulate very quickly!
Sometimes your VMs are not being used for production, and what invariably happens is that these machines get forgotten about or are left running for no good reason. While there are a few ways to capture such scenarios, what I'll show you now is a very quick way of scheduling known VMs to shut down (or start up) on a predefined schedule.
For AWS, the easiest way of scheduling a single standalone VM to shut down is to use the AWS Data Pipeline service.
Let's quickly walk through the workflow:
1) Create new Pipeline with CLI Command
2) Enter the Stop EC2 CLI commands
Note: this field shows as only one line of text vertically in Chrome, so I modified the styles to show the full command.
You can see that I have two different stop commands. I could combine these into one command with the two IDs; however, if one fails then they both fail, which can be problematic if, for example, an instance gets terminated.
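The command fields in the screenshot amount to something like the following (the instance IDs are placeholders, not the ones from my account):

```shell
# Two independent stop commands: if the first instance has been
# terminated, the second command still runs on its own.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 stop-instances --instance-ids i-0fedcba9876543210
```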
3) Set log file bucket
4) Select role
Choose custom and then select the two defaults.
Security note: roles need to be configured to allow Data Pipeline access to your VMs; please see here: https://aws.amazon.com/premiumsupport/knowledge-center/stop-start-ec2-instances/
That's it, you now have a scheduled task that will switch off your VMs nightly. It should be noted that this will start a micro Data Pipeline EC2 instance with a default run time of 50 minutes, so you need to ensure the end justifies the means; better yet, reduce the run time by editing the workflow to, e.g., 15 minutes.
To achieve the same results with Azure, we are going to use Azure Automation.
If you're familiar with Azure, you will know that there are currently two ways of creating VMs: the classic approach and the RM (Resource Manager) approach. In this post I'll show you the RM approach, but feel free to substitute classic in its place with a nearly identical approach.
1) Open or create an Azure Automation account.
2) Edit Assets
Add a variable for the AzureSubscriptionId you'll be using.
Select your service principal account; you'll have to search for it before it appears.
3) Import the runbook
We have two options now: we can either use some PowerShell or a graphically defined workflow. Let's do this with the graphical version; we don't need to create it ourselves, we simply import it from the gallery.
After importing, choose Edit on the runbook.
4) Set inputs
Then we set the two Assets we provided earlier and, optionally, a ResourceGroupName (to stop all VMs in a resource group) or a VMName. The "Auto" you see above isn't a keyword; it's my badly named resource group.
5) Set schedule
Go back to the runbook and choose Schedule.
With the schedule you can specify any of the input parameters and override the defaults if you so wish.
Security note: much the same as with AWS, you'll need to ensure you have permission to access the VMs from Azure Automation; the best option is to create a service principal application. See: https://azure.microsoft.com/en-us/documentation/articles/resource-group-create-service-principal-portal/
While the Azure approach does look much more convoluted, it is also much more powerful. For example, it is very easy to extend the Azure runbook to check all VMs for a "Production" tag and only shut down VMs that are not production (because shutting those down would be bad, right?). With AWS, we are simply relying on a feature of Data Pipeline that allows us to run simple CLI commands.
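The tag-aware runbook itself isn't shown in the post, but the same idea can be sketched with the Azure CLI; the resource group name and the tag key/value here are assumptions for illustration only:

```shell
# Deallocate every VM in the group that is NOT tagged environment=production.
# "my-dev-rg" and the environment tag are placeholders.
for id in $(az vm list --resource-group my-dev-rg \
      --query "[?tags.environment != 'production'].id" --output tsv); do
  az vm deallocate --ids "$id"
done
```

Note that untagged VMs also match the filter, which is usually what you want: anything not explicitly marked production gets switched off.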
Pricing is much of a muchness between the two: with Azure you can run for free (up to a limit), and with AWS the 15 minutes on a micro instance is not even worth worrying about.
As promised, here is the first instalment of the AWS vs Azure blog post saga; again, I'm trying to remain impartial throughout.
What I intend to outline at this stage is how to get started deploying a new application to AWS and to Azure from within Visual Studio. I'm sure there are those of you shouting, ".NET, Visual Studio, Azure? Of course Azure will do it better!!!" However, rest assured this is only the first of a few posts related to Azure App Service and AWS Elastic Beanstalk, and AWS doesn't fare all that badly.
The sample application in this case is just a File/New ASP.NET MVC 5 project using .NET 4.6.1. I'm only hitting the home page as a test and not worrying about databases for now (databases will make another interesting series of blog posts!).
AWS Elastic Beanstalk
AWS has an AWS Toolkit plugin for Visual Studio; this allows you to view and manipulate AWS resources.
It also lets you publish applications to AWS by right-clicking on the solution and choosing "Publish to AWS".
Once you choose this option you’ll be presented with a dialog that lets you choose your environment or create a new one.
If you don't already have one, let's create one; you start by choosing a name for the environment.
Next you choose your instance size (the underlying VM size, or any custom Amazon Machine Image you've created previously). Another option of interest is using a non-default VPC; this is basically the network you'll be running on, and all AWS accounts get a default VPC per region (if you delete it, you'll need to contact AWS to get it back!). The single instance environment option is selected here, as this is just a test. If I wasn't running in single instance mode, I would be able to enable rolling deployments to keep my app running while it gets updated (more about that here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rollingupdates.html)
Lastly we choose the application settings, I’m just deploying a .net 4 runtime debug application.
Once you review and finish, you can see your application start deploying on the portal
Once it's finished, which can take a few minutes after the upload, you should see the health go green, and you can access your application.
Note: if you're following along and wish to stop this Elastic Beanstalk environment to minimize costs/free tier bandwidth, please ensure you terminate it from the Elastic Beanstalk section of the console. Stopping the underlying EC2 instance will only signal the auto scaling group it belongs to to start a new instance and restore the health of the application.
Azure App Service
Now let's deploy the same application to Azure. Right-click the solution in Solution Explorer and choose Publish.
Choose Azure as the target.
As with AWS, where we chose a server environment, here we need to choose an app hosting plan. With Azure you can sign up for a free trial; if you have a subscription, you can choose to deploy a free cloud app (you get 10 free per region; there are some limitations, which we are not concerned with just now).
After creating this new hosting plan we arrive back at the publish dialog
Visual Studio then starts the publish task and opens the application in your default Visual Studio-specified web browser.
You can also see your new application springing to life in the Azure portal: http://portal.azure.com
So, in this blog post I've run through how to deploy applications to PaaS offerings on AWS and Azure. In the next post I'm going to drill down and do some more comparing and contrasting of these two offerings. Stay tuned!
So, way back circa 2008, I registered for the AWS free tier. Back then I was working in a different industry that didn't have much need for 'the cloud'; I played with a few Linux VMs during that year, but nothing came of it and my trial expired.
Fast forward two years and Azure was born, at least in public. I was immediately sold and was all-in. I've used, abused, and consulted on more Azure projects than I can remember, and any time the subject of AWS came up I dismissed it as an inferior pioneer of cloud tech. I mean, just look at the console; it's offensive to the eye, is it not?
Fast forward another two years and I found myself, while heavily swallowing the PaaS Kool-Aid, recommending AWS over Azure to a client. Why? Simply because AWS has a managed offering for Oracle, and that particular client did not have the knowledge or appetite to manage their own Oracle server.
This did open my eyes to the possibility that there might be a bit more to AWS than an ugly console. An opportunity presented itself to become AWS certified and I jumped at it; now, as I write this article, I can put this lovely logo on my business card.
So what have I learned about AWS in my quest for certification? Well, the console is not nearly as offensive as I once believed it to be; in fact, I think it's more practical than that sexy-looking new Azure portal. It's faster to get things done in than constantly sliding those Azure portal blades around the place, that's for sure. As for feature parity, for the most part both platforms tend to support the same features in the general sense, but once you drill down, differences do start to emerge.
I've also decided it's about high time I get certified in Azure too (underway); this should give me the street cred I need for what I'm going to try to achieve, and hopefully my findings will be as impartial as possible.
Starting from my next blog post, I'm going to compare features on both platforms and outline the pros and cons of each… Stay tuned for what should be a very interesting blog series. Obviously the topics are vast, so if anyone has any requests, please send me an email: b at briankeating.net.
So I've been looking at an issue for a client today whereby an application that worked perfectly well on most browsers was failing on Internet Explorer 11. Users were presented with the following error:
I think we can all agree that it's not very helpful.
The problem was that this particular application has a massive code base, so it was hard to know where to start, given that no other information was furnished by IE.
To gain insight into what was failing, I pressed the Debug button and let Visual Studio 2015 grab as much information as it could from the Microsoft symbol servers, only to be presented with the following:
Reading between the lines
Now, I'm not an assembly man, and I say that to the detriment of a future role that has it as a nice-to-have; I'd rather gouge my eyes out than mess with assembly. That said, looking at the assembly above, it was clear that the issue was related to style sheets / CSS.
This allowed me to narrow in on the offending code, and I quickly saw that the following line was causing the problem:
It appears IE11 doesn't like this. The solution for my client was to render the correct CSS server-side, and now it's working perfectly well for them.
Nearly 20 years into my professional IT career, I can honestly admit that this is the first time assembly has saved my bacon!
(But I'd still rather go blind.)
In this post I show you how to generate a shell for a web application in both Java and .NET. While they are not a direct one-to-one mapping, I think some of you will find this interesting, should you have never created a web application on either stack, or perhaps on just one of them.
I’ll let the videos speak for themselves.
I did promise to dissect both projects; however, I ran out of time.
I'll leave it to you to pick your tech stack of choice, and if you have any questions, just ask below and I'll explain in more detail if necessary.
So I’ve started using OData in anger and pretty much immediately stumbled on a problem when using Data Transfer Objects (DTOs). This post explains that problem and the solution.
The following error is encountered when trying to access the exposed entity by key:
No routing convention was found to select an action for the OData path with template '~/entityset/key'.
Here is the simple entity I'm exposing:
Here you can see that the underlying timesheets are just projected to their DTO counterparts using AutoMapper:
And here is the AutoMapper configuration (not that it makes any difference to the problem encountered):
To fix this problem I needed to set the EntityType.Name property on my OData entity type.
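The fix isn't reproduced above, so here's a sketch of what setting that property looks like with an ODataConventionModelBuilder; the TimesheetDto type, entity set name, and route names are illustrative, not necessarily the original project's:

```csharp
// WebApiConfig sketch: the DTO's default entity type name would be
// "TimesheetDto"; renaming it to "Timesheet" lets the routing
// conventions select an action for the entityset/key path again.
var builder = new ODataConventionModelBuilder();
var timesheets = builder.EntitySet<TimesheetDto>("Timesheets");
timesheets.EntityType.Name = "Timesheet";
config.MapODataServiceRoute("odata", "odata", builder.GetEdmModel());
```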
And thereafter, success!
If you ever come across the problem of the IntelliJ Application Servers menu greyed out like this:
This is simply because you need at least one Run Configuration.
I’m using JBoss just now so here’s what I do to add a run config:
Once this is done, you can see that the Application Servers tool window menu item becomes enabled.