Showing posts with label azure. Show all posts

Wednesday, June 27, 2012

Update Azure DB Firewall Rules with PowerShell

If you do Azure development and work from home like I do, you may find yourself repeatedly updating your SQL Azure Firewall settings to allow yourself to work with your database. 

This is especially true if your crappy ISP repeatedly disconnects you throughout the day and assigns you new IP addresses. I don't want to disparage any company in particular, but I really hope there's some bandwidth competition on the horizon.
So, I wrote a PowerShell script to fix this problem.  Now, updating the rule is a single command.

Notice that the script checks the current rule and replaces it if needed.  If the rule is already correct, it prints "Current Rule is Correct".

In order to use this script, you will need to download the WAPPSCmdlets module from CodePlex and load it with "Import-Module".

You will also need to fill in the four variables at the top of the script. Note that the certificate you use must be a Management Certificate that has been uploaded to the Azure Portal.

#TODO: set these up as parameters (with defaults)
$ruleName = 'Rule Name' #Whatever you want to see in the Azure UI for this Rule
$server = '' # the server name (without the ".database.windows.net" on the end)
$subId = '' # The Subscription ID
$cert = (Get-Item cert:\CurrentUser\My\[YOUR KEY GOES HERE]) #the key is the "Thumbprint" - use "dir cert:\CurrentUser\My" to find yours

function RemoveRule()
{
    Write-Verbose 'Removing Firewall Rule'
    Remove-SqlAzureFirewallRule -ServerName $server -RuleName $ruleName -SubscriptionId $subId -Certificate $cert | Format-Table
}

function AddRule($myIP)
{
    Write-Output "Adding rule for IP: $myIP"
    New-SqlAzureFirewallRule -StartIpAddress $myIP -EndIpAddress $myIP -ServerName $server -RuleName $ruleName -SubscriptionId $subId -Certificate $cert | Format-Table
}

## Function to retrieve the external IP address
## from whatismyip.com's automation endpoint.

function Get-ExternalIP {
    $source = "http://automation.whatismyip.com/n09230945.asp"
    
    $client = new-object System.Net.WebClient
    
    #add the header that whatismyip.com suggests from (http://www.whatismyip.com/faq/automation.asp)
    $client.Headers.Add('user-agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:12.0) Gecko/20100101 Firefox/12.0')
    
    $client.DownloadString($source).Trim() # trim stray whitespace so the IP comparison matches
}

function Update-AzureDBFirewallRules
{
    # get current IP.
    $currentIp = Get-ExternalIP
    $rules = (Get-SqlAzureFirewallRules $server -SubscriptionId $subId -Certificate $cert)
    
    $curRule =  $rules | ? {$_.RuleName -eq $ruleName}
    
    $shouldAdd = $false

    if ($curRule -ne $null) {
        $ruleIp = $curRule.StartIpAddress
        if ($ruleIp -ne $currentIp ) {
        
            Write-Warning "Current IP [$ruleIp] is incorrect, removing Rule"
            RemoveRule
            $shouldAdd = $true
        }
        else {
            Write-Output 'Current Rule is Correct'
        }
    }

    if ($shouldAdd -eq $true)
    {
        AddRule $currentIp
    }
}

Hope this helps some of you out there...

Thursday, May 17, 2012

Developing for, but remaining decoupled from Azure

(This started out as a Yammer discussion post, but got obviously way too long for that, so I decided to blog it instead.)

A coworker had mentioned that it could be difficult to develop for Azure while maintaining independence from it by decoupling from direct dependencies.  Having done this from the beginning of my almost-two-year-old Azure project, I decided to describe how I did it.

Why decouple?


First of all, why do we need to do this?

Well, I have developed for a very wide range of platforms in my past, including dozens of flavors of *nix, tons of versions of Windows (3.0, 3.1, 3.11, 95, 98, 98 SE, NT 3.X, 2000, ME, XP, Vista, Win 7), QNX, PSOS, OS/2, Mac OS, etc.

Given that, I've always had issues with tying my code too tightly to a platform.  It always seems to come back and bite you. Hard.

Developing/Debugging


If you have ever tried to develop and debug with the Azure Dev Fabric, you probably didn't even ask this question in the first place.

Working with the Dev Fabric is painful, especially on the Web development side.  Instead of being able to edit the HTML and JavaScript files while you debug, each edit requires a rebuild and redeploy.

If you're not familiar with this process, what actually happens when you debug a "Cloud" app is a full deployment.  Visual Studio compiles the code and then, upon success, packages a mini-Azure deployment and deploys it to the Dev Fabric.  This involves spinning up temporary Web Sites on your IIS installation along with what are effectively mini Virtual Machines.  So, in order to get any code changes over there, you must redeploy.

Bottom Line: no easy tweaks to your HTML, CSS or JavaScript while debugging.

Platform Independence


Sometimes, you may not have 100% buy-in from a client or a manager for running in Azure, so, in order to hedge your bets, you need to have the ability to run on Azure without depending on it, in case someone makes a decision to go the other way.  Maybe the client hires a new CIO and he wants to run on Amazon because Jeff Bezos is an old college buddy.  Crazier things have happened.

In my case, my project was, I believe, the first Azure project our company deployed into Production.  Thus, we hedged our bets a bit by allowing it to run as a regular on-premises deployment.  This had the added benefit of allowing us to do pilot/demo deployments on-site for our client so they could view the site during early development.

What to decouple?


First off, there are three main areas that need to be decoupled in order to maintain your freedom: Configuration Settings, Images/Static Files, and Data (SQL, etc.).

CONFIGURATION SETTINGS


Regular .NET apps use AppSettings from either web.config or app.config for simple key/value settings.  For this scenario, a simple interface will do:

public interface IConfigurationSettingsProvider
{
    string GetSettingValue(string key);
}
Non-Azure Implementation
using System.Configuration;

public class ConfigurationManagerSettingsProvider
    : IConfigurationSettingsProvider
{
    public string GetSettingValue(string key)
    {
        return ConfigurationManager.AppSettings.Get(key);
    }
}
Azure Implementation
using Microsoft.WindowsAzure.ServiceRuntime;

public class AzureRoleConfigurationSettingsProvider
    : IConfigurationSettingsProvider
{
    public string GetSettingValue(string key)
    {
        return RoleEnvironment.GetConfigurationSettingValue(key);
    }
}
As you can see, this isn't really all that complicated.

IMAGES


In my case, the images we serve up are generally keyed by an "Item Key" and can refer to either a "normal" image size or a "thumbnail".  We also needed the ability to store the images and retrieve them.

The implementation of my IImageStorage interface also actually takes care of scaling the images to the appropriate sizes during upload.

The particulars of your situation may vary, but the pattern could be similar.
NOTE: I removed a few method overloads here (for instance, each "Save" method has an overload that takes a "Stream", reads it in, and proxies to the byte[] version).

Interface
public enum ImageSize
{
    Thumbnail,
    Normal,
    Original
}
 
public interface IImageStorage 
{ 
    void SaveImage(int imageId, string pathInfo, byte[] imageBytes); 
 
    // removed overloads 
    // ... 
 
    Uri GetImageUri(int imageId, ImageSize size); 
 
    IEnumerable<int> GetAllImageIds();
 
    void DeleteImage(int imageId); 
} 


Non-Azure Implementation

In this case, I chose to make a "FileSystemImageStorage" class that uses a directory structure of

Root->
      Thumbs
      Normal
      Original

Each file name is the "ID" with ".jpg" appended.

The URLs returned are either "file:///" URLs or base64-encoded "data:" URIs, depending on the size.
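As a quick illustration of the base64-encoded URI idea (Python here purely for brevity; `data_url` is my own hypothetical helper, not from the project):

```python
import base64

def data_url(jpeg_bytes: bytes) -> str:
    """Build a base64-encoded "data:" URI for a JPEG image."""
    encoded = base64.b64encode(jpeg_bytes).decode("ascii")
    return f"data:image/jpeg;base64,{encoded}"

# The two JPEG start-of-image marker bytes encode to "/9g=":
print(data_url(b"\xff\xd8"))  # data:image/jpeg;base64,/9g=
```

Embedding small thumbnails this way avoids a second HTTP request, at the cost of a ~33% size increase from the base64 encoding.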


Azure Implementation

For Azure, the pattern is very similar.  I created a "BlobStorageImageStorage" class.  I use 3 Blob Storage Containers called "normal", "thumbs" and "original" which each contain a blob with the image inside it.

In this case, I chose to make the name the ID zero-padded to 10 digits in order to allow for long-term expansion.  So, ID 1234 becomes "0000001234", and my URL for its thumbnail is:

http://[accountname].blob.core.windows.net/thumbs/0000001234
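The padding itself is trivial in any language (sketched in Python; `blob_name` is a name I made up):

```python
def blob_name(image_id: int) -> str:
    """Zero-pad a numeric image ID to 10 digits for use as a blob name."""
    return f"{image_id:010d}"

print(blob_name(1234))  # 0000001234
```

Fixed-width names also keep blobs lexicographically sorted in ID order when listing a container.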

DATA


For a "typical" application, data usually means SQL.  In the case of SQL, the only difference as far as your application is concerned is a connection string, which is easy enough to abstract for yourself.

If, on the other hand, you are using Azure Table Storage, you will need some sort of Repository interface that will persist TableServiceEntity instances for you.

In our application, we decided to allow the Entities to subclass TableServiceEntity, regardless of whether or not we're storing them in Azure.

Given that, we have this class (keep in mind that I've removed error checking, comments, and a few other minor details):

public class TableServiceEntityRepository<TEntity>
    where TEntity : TableServiceEntity
{
    readonly string _tableName;
    readonly TableServiceContext _context;

    public TableServiceEntityRepository(
        ITableServiceContextFactory ctxFactory)
    {
        _tableName = typeof(TEntity).Name;
        _context = ctxFactory.Create(_tableName);
    }

    public IQueryable<TEntity> Get(
        Specification<TEntity> whereClause = null,
        Func<IQueryable<TEntity>,
            IOrderedQueryable<TEntity>> orderBy = null)
    {
        IQueryable<TEntity> query =
            _context.CreateQuery<TEntity>(_tableName);

        if (whereClause != null)
        {
            query = query.Where(whereClause.IsSatisfiedBy());
        }

        if (orderBy != null)
        {
            return orderBy(query);
        }

        return query;
    }

    public virtual void Insert(TEntity entity)
    {
        _context.AddObject(_tableName, entity);
    }

    public virtual void Update(TEntity entity)
    {
        var desc = _context.GetEntityDescriptor(entity);
        if (desc == null || desc.State == EntityStates.Detached)
        {
            _context.AttachTo(_tableName, entity, "*");
        }
        _context.UpdateObject(entity);
    }

    public virtual void Delete(TEntity entity)
    {
        var desc = _context.GetEntityDescriptor(entity);
        if (desc == null || desc.State == EntityStates.Detached)
        {
            _context.AttachTo(_tableName, entity, "*");
        }
        _context.DeleteObject(entity);
    }

    public virtual void SaveChanges()
    {
        _context.SaveChanges();
    }
}
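The Get method's compose-then-execute flow (optionally filter, then optionally order) is easy to see stripped of the IQueryable machinery. A minimal Python sketch, with names of my own invention:

```python
def get(items, where=None, order_by=None):
    """Mirror the repository's Get: build the query, filter, then order."""
    query = items
    if where is not None:
        query = [x for x in query if where(x)]  # like query.Where(...)
    if order_by is not None:
        return order_by(query)                  # like orderBy(query)
    return query

# Usage: keep the evens, then sort them descending.
evens_desc = get([3, 1, 4, 2],
                 where=lambda n: n % 2 == 0,
                 order_by=lambda q: sorted(q, reverse=True))
print(evens_desc)  # [4, 2]
```

The real C# version defers execution until enumeration, which this list-based sketch does not attempt to model.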

If you notice, this code uses a Specification for a "whereClause".  That is defined this way: 

public abstract class Specification<TEntity>
{
    private Func<TEntity, bool> _compiledFunc = null;

    private Func<TEntity, bool> CompiledFunc
    {
        get
        {
            if (_compiledFunc == null)
                _compiledFunc = this.IsSatisfiedBy().Compile();
            return _compiledFunc;
        }
    }

    public abstract Expression<Func<TEntity, bool>> IsSatisfiedBy();

    public bool IsSatisfiedBy(TEntity entity)
    {
        return this.CompiledFunc(entity);
    }
}
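For readers outside .NET, here is the same pattern sketched in Python. There are no expression trees, so a plain callable stands in for the compiled expression; all names here are my own, not from the project:

```python
from abc import ABC, abstractmethod
from typing import Callable, Generic, TypeVar

T = TypeVar("T")

class Specification(ABC, Generic[T]):
    """A reusable predicate object; the function is cached after first use."""

    @abstractmethod
    def predicate(self) -> Callable[[T], bool]:
        """Return the filter function (stands in for the C# expression tree)."""

    def is_satisfied_by(self, entity: T) -> bool:
        if not hasattr(self, "_cached"):
            self._cached = self.predicate()  # cache, like CompiledFunc
        return self._cached(entity)

class MinLengthSpec(Specification[str]):
    """Example spec: accepts strings at least n characters long."""
    def __init__(self, n: int):
        self.n = n

    def predicate(self) -> Callable[[str], bool]:
        return lambda s: len(s) >= self.n
```

The point of the pattern is that a query condition becomes a named, testable object you can pass to any repository implementation.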

The non-Azure version of this just uses the TableServiceEntity's PartitionKey and RowKey fields as the folder name and file name and uses binary serialization to write the files.  It's pretty simple, but of course wouldn't work for very large numbers of entities.  You could, however, choose to do the same thing with SQL Server, storing the serialized data in a SQL blob, for a quick speed improvement.

Though we have other implementation abstractions in our software for things like Queues, this should give you a good kick start on how things are done.

Summary


Abstracting things from specific hardware/platform dependencies is usually a Good Thing in my opinion.  Stuff Happens.  Platforms break/die/become uncool, etc.  We shouldn't have to refactor significant portions of applications in order to be more Platform agnostic.

Feel free to comment or contact me for more details on how things like this can be done, or any questions on my implementation of the Specification pattern or anything at all :)

Thursday, March 15, 2012

How To: Download Azure Publish Settings File

So, as of Azure SDK 1.6 (I think), Microsoft added the ability to import your Azure publishing settings into Visual Studio to make deployments easier.
Choosing the import option launches your browser and has you log in to your Live account.
Then, it downloads the file automatically and imports it, saving you time.
I installed Azure Diagnostics Manager Version 2 and it has a neat feature to allow you to import a Publish Settings File.  So, off I went to try to download it.  I found NO documentation on how to get it.
By some spelunking through browser history, I found this magical link:
https://windows.azure.com/download/publishprofile.aspx
This will allow you to download the file and do whatever you want with it, including import it into ADM.
WARNING: This file contains sensitive information that allows publishing to your Azure account.  I recommend you delete it after you import it into ADM.

Wednesday, October 26, 2011

Windows Azure AppFabric Cache's Minuscule Features

 

Just wanted to warn/inform people who are considering using AppFabric Caching in Azure…

Here’s a list of things it won’t do:

  • Regions
  • Named Caches
  • Getting a list of the cache keys currently in use
  • High Availability mode (redundant copies across machines for durability)
  • Sliding-window expiration
  • Events for things like eviction, add/remove, etc.
    • so you can't update a local cache with notifications from the global cache
    • you don't get notified when something is removed, so you only find out when you get "null" back from a Get()
  • PowerShell integration (if you're into that)
  • Tag support (tagging basically attaches extra metadata beyond the key without pulling the whole object)

So, basically, ALL you get is a simple Get/Set distributed cache that doesn't even support the full feature set of the basic ObjectCache in .NET 4.0.

They really shouldn't be calling it "AppFabric Cache" at all, since it doesn't really support anything that comes with that API.

I must say that I'm really disappointed.

We waited a LONG time for this feature and got something that could have been coded in a month.

Bad form, Microsoft.  Bad form!

Conflicts Between Versions Of Assemblies

 

I just upgraded a project from the Azure SDK 1.4 to SDK 1.5.  I started getting the "Found conflicts between different versions of the same dependent assembly" warning on the Microsoft.ServiceBus.dll file.

I thought to myself, "Self, I've seen this before.  You're just referencing the wrong copy of an assembly somewhere"

So, I checked every reference to Microsoft.ServiceBus.dll … Hmm… they're all correct. ??

I did a full rebuild… still there..

I did a super-duper clean (close the project, delete every "bin" and "obj" folder, open the project and rebuild)… nope…

Time to get medieval on this stuff. So, I opened the topmost assembly (in this case, my Web application assembly) in ILSpy and started looking through each and every referenced assembly until I found the culprit:


Microsoft's own Transient Fault Handling library (used for retrying database calls in Azure).  Now, I've contacted the team who wrote that, hoping that they will update their NuGet package.  Otherwise, I'll have to add that ugly Binding Redirect in my config file.

For now, I'll just stare at the warning for a while and hope the NuGet package gets a quick update. :)

Tuesday, September 6, 2011

Preventing accidental deployments to Azure

Your Azure Roles won't start. You're in the painful loop of "Busy", "Stopped", "Busy"... maybe they're reporting themselves as "unhealthy". It's Azure Deployment Hell and we (those of us who do Windows Azure dev) know it well.

Well, one of the situations that causes this is easy to prevent: deploying the wrong configuration to Azure.




In the name of pride, we need to avoid this embarrassing time sink and I'll tell you how. On my current project, the production scenario consists of using a "Production" Service Configuration and an "AzureRelease" Build Configuration. So, I want to make sure that these two things are selected when Packaging/Deploying the software.

So, we turn to MSBuild tasks:

Manually edit the ".ccproj" file and insert the following before the closing </Project> tag.
<PropertyGroup>
  <PublishDependsOn>
    VerifyAzureRelease;
    $(PublishDependsOn);
  </PublishDependsOn>
</PropertyGroup>
<Target Name="VerifyAzureRelease" Condition="'$(Configuration)' != 'AzureRelease' Or '$(TargetProfile)' != 'Production'">
  <Error Text="Should Not Be Deploying non-AzureRelease Code" ContinueOnError="false" />
</Target>
The <PublishDependsOn> element redefines the standard dependency list to include your new "VerifyAzureRelease" <Target>, where we test for the conditions we want and emit an error if they are not met.



Hopefully, this will help some other unfortunate souls out there from going through this problem as many times as I have.

Tom