Monday, November 5, 2012

Remote Desktop to Ubuntu in Azure

After having seen a question on the "Connect" site about this, I decided to write a quick blog post on how to set up Remote Desktop (RDP) in Azure for a Linux installation (Ubuntu in this case).

After I wrote the post, my employer decided to post it on their site, so go here to view the post:

Remote Desktop to Ubuntu in Windows Azure

Wednesday, June 27, 2012

Update Azure DB Firewall Rules with PowerShell

If you do Azure development and work from home like I do, you may find yourself repeatedly updating your SQL Azure Firewall settings to allow yourself to work with your database. 

This is especially true if your crappy ISP repeatedly disconnects you throughout the day and assigns you new IP addresses. I don't want to disparage any company in particular, but I really hope there's some bandwidth competition on the horizon.
So, I wrote a PowerShell script to fix this problem.  Now, I just run the following script:


Notice that the script checks the current rule and replaces it only if needed.  If the rule is already correct, it prints "Current Rule is Correct".

In order to use this script, you will need to download the WAPPSCmdlets module from CodePlex and load it with "Import-Module".

You will also need to fill in the 4 variables at the top of the script. Note that the Certificate you use must be a Management certificate that is installed into the Azure Portal.

#TODO: set these up as parameters (with defaults)
$ruleName = 'Rule Name' # whatever you want to see in the Azure UI for this rule
$server = '' # the server name (without the ".database.windows.net" suffix)
$subId = '' # the Subscription ID
$cert = (Get-Item cert:\CurrentUser\My\[YOUR KEY GOES HERE]) # the key is the "Thumbprint" - use "dir cert:\CurrentUser\My" to find yours

function RemoveRule {
    Write-Verbose 'Removing Firewall Rule'
    Remove-SqlAzureFirewallRule -ServerName $server -RuleName $ruleName -SubscriptionId $subId -Certificate $cert | Format-Table
}

function AddRule($myIP) {
    Write-Output "Adding rule for IP: $myIP"
    New-SqlAzureFirewallRule -StartIpAddress $myIP -EndIpAddress $myIP -ServerName $server -RuleName $ruleName -SubscriptionId $subId -Certificate $cert | Format-Table
}

## Function to retrieve the external IP address.
## The external address is parsed out of the
## page downloaded from $source.

function Get-ExternalIP {
    $source = "" # URL of a page that reports your external IP
    $client = New-Object System.Net.WebClient
    # send a browser-like user-agent header (some sites reject the default WebClient agent)
    $client.Headers.Add('user-agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:12.0) Gecko/20100101 Firefox/12.0')
    $page = $client.DownloadString($source)
    # pull the first dotted-quad out of the page (adjust the pattern to your source page)
    if ($page -match '\d{1,3}(\.\d{1,3}){3}') { return $matches[0] }
}

function Update-AzureDBFirewallRules {
    # get current IP
    $currentIp = Get-ExternalIP
    $rules = (Get-SqlAzureFirewallRules $server -SubscriptionId $subId -Certificate $cert)
    $curRule = $rules | ? { $_.RuleName -eq $ruleName }
    $shouldAdd = $false

    if ($curRule -ne $null) {
        $ruleIp = $curRule.StartIpAddress
        if ($ruleIp -ne $currentIp) {
            Write-Warning "Current IP [$ruleIp] is incorrect, removing Rule"
            RemoveRule
            $shouldAdd = $true
        }
        else {
            Write-Output 'Current Rule is Correct'
        }
    }
    else {
        $shouldAdd = $true
    }

    if ($shouldAdd -eq $true) {
        AddRule $currentIp
    }
}

Hope this helps some of you out there...

Thursday, May 17, 2012

Developing for, but remaining decoupled from Azure

(This started out as a Yammer discussion post, but got obviously way too long for that, so I decided to blog it instead.)

A coworker had mentioned that it can be difficult to develop for Azure while maintaining independence from it by decoupling from direct dependencies.  Having done this from the beginning of my almost two-year-old Azure project, I decided to share how I did it.

Why decouple?

First of all, why do we need to do this?

Well, I have developed for a very wide range of platforms in my past, including dozens of flavors of *nix, tons of versions of Windows (3.0, 3.1, 3.11, 95, 98, 98 SE, NT 3.X, 2000, ME, XP, Vista, Win 7), QNX, PSOS, OS/2, Mac OS, etc.

Given that, I've always had issues with tying my code too tightly to a platform.  It always seems to come back and bite you. Hard.


Ease of Development

If you have ever tried to develop and debug with the Azure Dev Fabric, you probably didn't even ask this question in the first place.

Working with the Dev Fabric is painful, especially on the Web development side.  Instead of being able to edit the HTML and JavaScript files while you debug, each edit requires a rebuild and redeploy.

If you're not familiar with this process, what is actually happening when you debug a "Cloud" app is a full deployment.  Visual Studio compiles the code, then, upon success, packages a mini Azure deployment and deploys it to the Dev Fabric.  This involves spinning up temporary Web Sites on your IIS installation and spinning up what are effectively mini Virtual Machines.  So, in order to get any code changes over there, you must redeploy.

Bottom Line: no easy tweaks to your HTML, CSS or JavaScript while debugging.

Platform Independence

Sometimes, you may not have 100% buy-in from a client or a manager for running in Azure, so, in order to hedge your bets, you need to have the ability to run on Azure without depending on it, in case someone makes a decision to go the other way.  Maybe the client hires a new CIO and he wants to run on Amazon because Jeff Bezos is an old college buddy.  Crazier things have happened.

In my case, my project was, I believe, the first Azure project our company deployed into Production.   Thus, we were hedging our bets a bit by allowing it to run on a regular on-premise deployment.  This had the added benefit of allowing us to do pilot/demo deployments on-site for our client so they could view the site during early development.

What to decouple?

First off, there are three main areas that need to be decoupled in order to maintain your freedom: Configuration Settings, Images/Static Files, and Data (SQL, etc.)


Configuration Settings

Regular .NET apps use AppSettings from either web.config or app.config for simple key/value settings.  For this scenario, a simple interface will do:

public interface IConfigurationSettingsProvider
{
    string GetSettingValue(string key);
}

Non-Azure Implementation

using System.Configuration;

public class ConfigurationManagerSettingsProvider
    : IConfigurationSettingsProvider
{
    public string GetSettingValue(string key)
    {
        return ConfigurationManager.AppSettings.Get(key);
    }
}
Azure Implementation

public class AzureRoleConfigurationSettingsProvider
    : IConfigurationSettingsProvider
{
    public string GetSettingValue(string key)
    {
        return RoleEnvironment.GetConfigurationSettingValue(key);
    }
}

As you can see, this isn't really all that complicated.


Images/Static Files

In my case, the images we're serving up are generally based on an "Item Key" and can refer either to a "normal" image size or a "thumbnail" image.  We also needed the ability to store the images and retrieve them.

The implementation of my IImageStorage interface also actually takes care of scaling the images to the appropriate sizes during upload.

The particulars of your situation may vary, but the pattern could be similar.
NOTE: I removed a few method overloads here (for instance, each "Save" method has an overload that takes a Stream and proxies it to the byte[] version after reading the Stream in)

public enum ImageSize
{
    Normal,
    Thumbnail
}

public interface IImageStorage
{
    void SaveImage(int imageId, string pathInfo, byte[] imageBytes);
    // removed overloads
    // ...
    Uri GetImageUri(int imageId, ImageSize size);
    IEnumerable<int> GetAllImageIds();
    void DeleteImage(int imageId);
}

Non-Azure Implementation

In this case, I chose to make a "FileSystemImageStorage" class that uses a directory structure of


Each file name is the "ID" with ".jpg" at the end.

The URLs returned are either "file:///" or "data:" (base64-encoded) URLs, depending on the size.
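Incidentally, the "data:" variant is nothing more than string assembly. A quick illustration (in JavaScript for brevity; the project code itself is C#, and the function name here is made up):

```javascript
// A "data:" URL embeds the base64-encoded bytes directly in the URL,
// so small images like thumbnails need no second HTTP request.
function dataUrl(mimeType, base64Bytes) {
  return 'data:' + mimeType + ';base64,' + base64Bytes;
}

dataUrl('image/jpeg', 'QUJD'); // → "data:image/jpeg;base64,QUJD"
```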

Azure Implementation

For Azure, the pattern is very similar.  I created a "BlobStorageImageStorage" class.  I use 3 Blob Storage Containers called "normal", "thumbs" and "original" which each contain a blob with the image inside it.

In this case, I chose to make the name the ID zero-padded to 10 characters in order to allow for long-term expansion.  So, ID 1234 becomes "0000001234", and my URL for the thumbnail for 1234 is:

http://[accountname].blob.core.windows.net/thumbs/0000001234
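The padding itself is trivial; sketched here in JavaScript for illustration (the real code is C#):

```javascript
// Zero-pad a numeric ID to 10 characters so blob names are uniform,
// sort correctly, and leave room for long-term growth.
function blobName(id) {
  return String(id).padStart(10, '0');
}

blobName(1234); // → "0000001234"
```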


Data

For a "typical" application, data usually means SQL.  In the case of SQL, the only difference as far as your application is concerned is the connection string, which is easy enough to abstract for yourself.

If, on the other hand, you are using Azure Table Storage, you will need to have some sort of Repository interface that will persist TableServiceEntities for you.

In our application, we decided to allow the Entities to subclass TableServiceEntity, regardless of whether or not we're storing them in Azure.

Given that, we have this class (keep in mind that I've removed error checking, comments and a few other minor details - and the code patterning is kinda strange too in this format):

public class TableServiceEntityRepository<TEntity>
    where TEntity : TableServiceEntity
{
    readonly string _tableName;
    readonly TableServiceContext _context;

    public TableServiceEntityRepository(
        ITableServiceContextFactory ctxFactory)
    {
        _tableName = typeof(TEntity).Name;
        _context = ctxFactory.Create(_tableName);
    }

    public IQueryable<TEntity> Get(
        Specification<TEntity> whereClause = null,
        Func<IQueryable<TEntity>,
            IOrderedQueryable<TEntity>> orderBy = null)
    {
        IQueryable<TEntity> query =
            _context.CreateQuery<TEntity>(_tableName);
        if (whereClause != null)
            query = query.Where(whereClause.IsSatisfiedBy());
        if (orderBy != null)
            return orderBy(query);
        return query;
    }

    public virtual void Insert(TEntity entity)
    {
        _context.AddObject(_tableName, entity);
    }

    public virtual void Update(TEntity entity)
    {
        var desc = _context.GetEntityDescriptor(entity);
        if (desc == null || desc.State == EntityStates.Detached)
            _context.AttachTo(_tableName, entity, "*");
        _context.UpdateObject(entity);
    }

    public virtual void Delete(TEntity entity)
    {
        var desc = _context.GetEntityDescriptor(entity);
        if (desc == null || desc.State == EntityStates.Detached)
            _context.AttachTo(_tableName, entity, "*");
        _context.DeleteObject(entity);
    }

    public virtual void SaveChanges()
    {
        _context.SaveChanges();
    }
}
Notice that this code uses a Specification for the "whereClause".  That is defined this way:

public abstract class Specification<TEntity>
{
    private Func<TEntity, bool> _compiledFunc = null;

    private Func<TEntity, bool> CompiledFunc
    {
        get
        {
            if (_compiledFunc == null)
                _compiledFunc = this.IsSatisfiedBy().Compile();
            return _compiledFunc;
        }
    }

    public abstract Expression<Func<TEntity, bool>> IsSatisfiedBy();

    public bool IsSatisfiedBy(TEntity entity)
    {
        return this.CompiledFunc(entity);
    }
}

The non-Azure version of this just uses the TableServiceEntity's PartitionKey and RowKey fields for the folder name and file name and uses binary serialization to write the files.  It's pretty simple, but of course wouldn't work for very large numbers of entities.  You could, however, choose to do the same thing with SQL Server, using a SQL Blob for the serialized data, for a quick speed improvement.

Though we have other implementation abstractions in our software for things like Queues, this should give you a good kick start on how things are done.


Abstracting away specific hardware/platform dependencies is usually a Good Thing in my opinion.  Stuff Happens.  Platforms break/die/become uncool, etc.  We shouldn't have to refactor significant portions of an application in order to be more platform-agnostic.

Feel free to comment or contact me for more details on how things like this can be done, or any questions on my implementation of the Specification pattern or anything at all :)

Friday, April 20, 2012

Solution to 0x80007002 error installing Zune 4.8

I just bought a Nokia Lumia 900 only to find that Microsoft took a page from Apple's book and required the installation of the Zune software (their iTunes equivalent) in order to upgrade the phone. Ugh!

So, I downloaded the installer and, of course, got the 0x80007002 error installing the software. It tells me:
The media for the installation package couldn't be found. It might be incomplete or corrupt.
Uh… What? It's a Self-Extracting EXE!

Here's what I tried:

  • Download the "full version" (first one downloaded the installer to temp files) –  FAIL
  • Ran a Microsoft "FixIt" app that was supposed to .. you know.. Fix It? –  FAIL
  • Turned off the Firewall –  FAIL
  • Yelled at the Screen –  FAIL
  • "Unblocked" the EXE file and re-ran the installer –  FAIL
  • Decided to get insanely stupid and unzipped the file myself and ran the contained installer – Success!? (*facepalm*)

Whiskey. Tango. Foxtrot? 

Apparently their "self-extracting" EXE is not very good at the job. Can't blame them, since self-extracting zip files are a really recent innova--- Wait! That technology has existed for decades!!

Come on guys! Maybe you need a refresher course!
("It's all ball bearings nowadays!")

So.. if you run into this, just unzip the thing your damned self and call it a day!

Sunday, March 18, 2012

JsTrace 1.3 Released

Get it while it's hot!

I just added a few features to the JsTrace code:
  • New passthrough methods on the Trace object for the following: assert, clear, count, dir, dirxml, exception, group, groupCollapsed, groupEnd, profile, profileEnd, table, time, timeEnd, trace
    • Note that the Trace script does nothing but pass these through if they exist
  • Ability to get/set the default trace level with the Trace.defaultTraceLevel method
  • Ability to reset some or all of the currently configured trace levels to the defaults by calling the Trace.resetToDefault method.
  • New Jasmine tests to make sure it works (mostly)
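For anyone curious what "pass these through if they exist" amounts to: the idea is to forward the call to the real console method when the browser provides one, and silently swallow it otherwise. A minimal sketch of that pattern (my illustration, not the actual JsTrace source):

```javascript
// Sketch of the console-passthrough pattern: forward a call to the
// real console method if it exists, otherwise do nothing.
function makePassthrough(consoleObj, name) {
  return function () {
    if (consoleObj && typeof consoleObj[name] === 'function') {
      return consoleObj[name].apply(consoleObj, arguments);
    }
    // no such console method: swallow the call
  };
}

// Example: a fake console that only implements group()
var calls = [];
var fakeConsole = { group: function (label) { calls.push(label); } };

var group = makePassthrough(fakeConsole, 'group');
var table = makePassthrough(fakeConsole, 'table');

group('startup');  // forwarded
table([1, 2, 3]);  // silently ignored -- fakeConsole has no table()
```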
On the MVC side, I added a feature to the JsTrace.MVC package:
  • The ability to force logging to the server to be synchronous in order to preserve the order of messages coming back to the server.
    • It was either that or adding message numbers, and this was quicker.
NuGet packages are updated:

Or browse the code here:

Happy debugging!

Saturday, March 17, 2012

Disable Minification with MVC 4 Bundles

I decided to play around with a bunch of new technologies to dive in and learn today.  I wrote a basic blog app (yeah, boring, I know) using the following tools:
    (TL;DR? Click Here)

Thursday, March 15, 2012

How To: Download Azure Publish Settings File

So, as of Azure SDK 1.6 (I think), Microsoft added the ability to import your publishing settings for Azure into Visual Studio to make deployments easier.
This then launches your browser and has you log in to your Live account.
Then, it downloads the file automatically and imports it, saving you time.
I installed Azure Diagnostics Manager Version 2 and it has a neat feature that lets you import a Publish Settings file.  So, off I went to try to download one.  I found NO documentation on how to get it.
By some spelunking through browser history, I found this magical link:
This will allow you to download the file and do whatever you want with it, including import it into ADM.
WARNING: This file contains sensitive information that allows publishing to your Azure account.  I recommend you delete it after you import it into ADM.

Sunday, March 11, 2012

Quick Tip: VirtualBox Win 8 Custom Resolution

1680x1050, 1600x1200, 1280x1024, 1152x864, 1024x768, 800x600

I'm not sure what the true root cause of the problem is, but, when running the Windows 8 Consumer Preview in VirtualBox, the set of allowed resolutions is pretty pathetic.

I don't even know why anyone would buy a 4:3 ratio monitor anymore, so I don't know why there aren't a bunch of supported 16:9 options in here.

In order to get around this, there's a command that, though simple, needs to be run from the command line (after shutting down your virtual machine).

So, if you go to your VirtualBox folder (usually C:\Program Files\Oracle\VirtualBox) and run the following command (substituting your own VM name and preferred resolution), you should be OK:

    VBoxManage setextradata "[Your VM Name]" CustomVideoMode1 1920x1080x32

Then, when you go back into your machine, you should see the new resolution listed for use.

I'm not sure how to make this resolution available in every subsequent virtual machine, but I'm glad it's available in this one.

Thanks to Dustin for what amounts to all of this information in his article. :)


Sunday, March 4, 2012

Released: JsTrace.MVC

I just released the JsTrace add-on JsTrace.MVC as a NuGet package.

What is it?  It's a way to automatically proxy JsTrace messages from client-side JavaScript to your MVC application.

Basically, when you add this package, you get the following Area added to your ASP.NET MVC project:

The JsTraceController has a single "Index" method that receives a JsTraceMessage object.  All that contains is the module, level and message sent from the JavaScript side.
Given the following javascript:
var tracer = new Trace('TestModule');
tracer.error('testing: error message');
The default implementation just outputs to the console using Debug.WriteLine:
   JsTrace >> [error] Module 'TestModule' : testing: error message
All you need to do is use the included HtmlHelper extension method like this, depending on your syntax:
<%: Html.RenderJsTraceProxy() %>

By default, the rendered script will proxy only messages that pass the “switch test”. If you pass “true” in the RenderJsTraceProxy(), it will send all of them, which might be a bit crazy in production.

The script uses the jQuery $.ajax() method to post a JSON object to the server asynchronously, ignoring both success and error results.
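The posted object is just the three fields the controller expects. Something along these lines (a sketch; the field names follow the JsTraceMessage description above, and the exact wire format may differ):

```javascript
// Build the JSON body for a proxied trace message: module, level
// and message, matching the JsTraceMessage object on the MVC side.
function buildTraceMessage(module, level, message) {
  return JSON.stringify({ module: module, level: level, message: message });
}

var body = buildTraceMessage('TestModule', 'error', 'testing: error message');
// body would then be POSTed to the JsTrace controller via $.ajax({ type: 'POST', ... })
```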

I may have a follow up version that has more options for the Proxy, like being able to pass a predicate-style (returns boolean) method to determine whether or not to send the message to the server.  This way you could customize the logic yourself.

Check it out, and let me know what you think!


Monday, February 27, 2012

JsTrace 1.1.1 released

So, I released a new version of the JsTrace project.

This version basically includes rewriting things to be a little more "proper" in the JavaScript side of things. 

  1. I removed things like the "with" keyword (reimplemented just using basic closure technique)
  2. Turned on/Fixed a lot of JSHint restrictions (though I still don't dig the "use one var statement" thing, so I don't follow that – it's totally unnecessary in my opinion).
  3. Updated the Intellisense comments so that we have as much info in client code as possible
  4. Updated source code to have a test project and the NuGet package generation in it.

So, if you're using the NuGet package, you'll just get the update automatically; otherwise, go snag it on CodePlex:

Happy Debugging!


Saturday, February 18, 2012

Released: JsTrace - My JavaScript Diagnostics Module (NuGet Package too!)

I've been planning on this for a while, but, after my original post about JsTrace, I wanted to make it easier for people to get to and use.

So, first of all, I added JsTrace to CodePlex.

Then, I wrote some pretty decent documentation for it.

Finally, I published it as a NuGet Package.

Use it at will and spread the word!

Wednesday, February 8, 2012

Just Published: TFS Solution Info - Visual Studio Extension

The Problem

I deal with Team Foundation Server (TFS) a lot day to day, so I occasionally find myself wondering which Branch and/or Workspace I happen to have open on my project at any given time.  For some people, this is not a problem, since they work on one particular branch, do their work and then check in.  For me, though, it’s different.
I do parallel development on my project.  Sometimes I’m working on a great new feature for 2.0 while switching over to a HotFix branch to do a quick patch for the 1.5 version that’s in Production.
I also use a separate workspace in TFS to Merge and deploy code.  Throw in the “Experimental” Workspace I use to play around in various versions of the code, and you can see why I frequently get lost.

The Solution

(no pun intended! -- OK, I confess: It was totally intended!)
Banking on the theory that I’m not alone in this – and, based on my coworker feedback, I’m not – I decided to publish the Visual Studio plugin I created.
For lack of a better name, I called it “TFS Solution Info”
Here’s what it looks like


It’s pretty simple.  Using the Visual Studio SDK, I connected up to Solution events and TFS-related information to display information about the currently loaded Solution.

I can’t believe how useful this has been for me.  I hope it’s useful for you too!

Get TFS Solution Info from the Visual Studio Gallery!

Or, you can just search for “TFS Solution Info” in the Visual Studio Extension Manager.  The code will be posted on CodePlex very soon.

Let me know what you would like to see this do.

[Update: just updated the formatting]

Friday, February 3, 2012

Pro Tip: Code Analysis, IoC and Excessive Class Coupling

This is a quick tip:

If you’re using Visual Studio's Code Analysis to make your code better (and who isn’t?), you’ll obviously have a bit of a problem with any IoC bootstrapping because you’re most likely wiring up a LOT of interfaces and classes together in your bootstrapping logic.

This will result in the CA1506 "Avoid Excessive Class Coupling" warning.

SuppressMessage to the rescue!

Just decorate your IoC bootstrapper class or method with this (you'll need a using for System.Diagnostics.CodeAnalysis):

[SuppressMessage("Microsoft.Maintainability",
    "CA1506:AvoidExcessiveClassCoupling",
    Justification = "IoC bootstrapping wires up many types by design")]
public static class IoC
Problem Solved!

Sunday, January 29, 2012

Debug.Assert() – So, what about Release Mode?

Sometimes, as a developer, you create some little nifty tool to make life easier and you forget that other people may not have come up with that idea/solution and are still struggling with a problem you solved ages ago. 
I was having a Skype conversation with Pete Brown and he said “Why don't you blog this stuff, or set up a GitHub project or something with these things?”.  I told him, “well, actually, I just started blogging!”  
That conversation gave me a figurative kick in the tuchus, so I decided to share a small nugget.

Defensive Development

If you know me you know that I am a big fan of code instrumentation and “defensive development”. I think it’s the best way to protect yourself.  To me, developing defensively means that you are doing the following in your code:
Generally, you’re trying to “bullet-proof” your code.
Given that I think like this, I tend to use debug assertions a lot.

Saturday, January 28, 2012

My project featured as a Microsoft Case Study

It’s kind of neat seeing your own project featured as a Microsoft Case Study.

Of course, now I want to update it with more information, since a lot has happened since we gave them all that information.  The project has actually had a direct effect on the company’s quarterly profits and is now going to be rolled out internationally (in phases). 

Anyway, wanted to pause to toot my own horn for a sec.  Toot! Toot! ;)

Live.com password idiocy

Back Story

I use KeePass to maintain all my passwords. Then, I have this password database stored on a USB key I carry with me.  I only have to remember one high-entropy password (that doesn't mean it's hard to remember, it just has to be hard to guess). So, I try to make my other passwords as complicated as possible.

Changing My Password on Live.com

Anyway, I went to change my password on live.com, like I prefer to do on a semi-regular basis.  I got to their page and entered a new password.  I tried one that was 256 characters long, but I noticed that, with no warning at all, it truncated my password at 16 characters.
So, they have a minimum of 6 characters (which is on the page) and an apparent maximum of 16 characters.  So, I changed the options to generate a 16-character password with all sorts of variability in the characters used and tried again. (This time it was: “{jƒ/ýQîÔ·z4Ú«<[“)
Now, I get this lovely message:

What’s Missing?

Apparently, the powers that be at Microsoft aren't going to bother telling me which characters are and are not allowed!  So, what?  I just sit here and guess for hours?
I was reminded of this great xkcd comic. This is not security!  It's obscurity.  Forcing a person to choose a password of 6-16 characters from a fixed subset of characters just trims down the dataset a computer needs to search while cracking the password. At the same time, requiring special characters (but not that special!) in order to make it "strong" just makes passwords harder for humans to come up with, with the wonderful side effect of also making them nearly impossible to remember!
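Some rough numbers make the point. Assuming an alphabet of about 95 printable ASCII characters (the real allowed set here is evidently smaller, which only helps an attacker), the entropy of a truly random password is length × log2(alphabet size):

```javascript
// Bits of entropy for a random password: n characters drawn
// uniformly from an alphabet of a symbols gives n * log2(a) bits.
function entropyBits(length, alphabetSize) {
  return length * Math.log2(alphabetSize);
}

entropyBits(16, 95); // ≈ 105 bits -- the ceiling the 16-character cap imposes
entropyBits(32, 95); // ≈ 210 bits -- what merely doubling the cap would allow
```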
Well, I’m going to go keep trying passwords over and over again until I find one it will allow…ugh

Thursday, January 26, 2012

ASP.NET MVC OutputCache override

IE Strikes Again!

Because of my recent discovery of the IE9 “feature” that makes it cache unlike every other browser, I had to come up with a small change to our caching scheme on my project. To be fair, the IE9 team is actually well within RFC definitions of how to handle “cache-control: private”, so I shouldn’t complain too much about it. Though, why is it always IE that causes me to do extra coding?

Anyway, my solution was to add a “Default” OutputCache setting for all my MVC Controllers and then, to override these options on individual Controller Actions when needed.  The problem is: I can’t find any information on whether or not an OutputCache directive on an Action will override one on the Controller.

So, I wrote a quick test app to test it out. 

The Code

    // default everything to a tiny cache time
    [OutputCache(CacheProfile = "ShortCache")]
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            ViewBag.Message = "Welcome to ASP.NET MVC!";
            return View();
        }

        // change to a longer cache time
        [OutputCache(CacheProfile = "LongCache")]
        public ActionResult About()
        {
            return View();
        }
    }

Web.config mods

These profiles go inside the <system.web> element:

            <caching>
              <outputCacheSettings>
                <outputCacheProfiles>
                  <add duration="1" name="ShortCache" varyByParam="*" />
                  <add duration="60" name="LongCache" varyByParam="*" />
                </outputCacheProfiles>
              </outputCacheSettings>
            </caching>


Well, what do you know!? It worked! Hopefully someone else stumbles across this and saves themselves some time.

Monday, January 16, 2012

Silverlight 5 Toolkit, 3D and crappy installers..

I'm in the rather grueling (er, awesome!) process of tech reviewing my friend Pete Brown's new book, Silverlight 5 in Action.  As a part of that, I had to get set up with the Silverlight 5 Toolkit in order to get the new 3D templates.

Frankly, I'm rather excited at the prospect of playing around with 3D stuff in Silverlight.  We've come a very long way since the days of just displaying simple things like a clock in a browser plugin.

So, I installed the toolkit, started up Visual Studio 2010 and chose a Silverlight 3D Application.


I called my project “Silverlight3dExample”.  Now, Visual Studio fires up the hamsters in their wheels and all the other Rube Goldberg-esque machinations inside and comes up with this lovely tandem of errors:



So, after a bit of googling, I realized, much to my chagrin, that I neglected a rather important notice on the screen of the Toolkit:

Note: You must install XNA Studio in order to use the new Silverlight 3D templates. Otherwise the new templates will not show up.

Oh.. uhh.. oops.  Seems kind of odd that such a big missing prerequisite isn’t also detected by the installer, but, whatever…

So, I download XNA studio and install that… more whirring, banging from my machine ensues.  Then, just in case things won’t recover gently, I decide to create the solution from scratch again.  After all, I hadn’t done anything yet.

Woohoo! all 4 projects successfully created!


Sweetness, right!?  So, I hit Ctrl-Shift-B (build command for you mouse-using pansies out there) and voila, everything builds!  Just kidding!  Now, I get this obvious and very helpful error:

Compile error -2147024770

(0, 0): error : Unknown compile error (check flags against DX version) D:\Dev\Book…Silverlight3dExample\Silverlight3dApp\CustomEffect.slfx

Umm.. OK.. Googling… googling… uh… yeah… Apparently, there's another prerequisite that the installer also doesn't check for: DirectX 9.  So, I download that and install it (cringing at every single "installing XX update from 2006" message – how can that really not break my machine?).

Finally, it actually builds!  No thanks to the Silverlight Toolkit guys (well, some thanks, since they wrote it in the first place!), I can now continue reviewing this chapter of what really is a pretty killer book. You should buy it. Now.

Tuesday, January 10, 2012

Yes, I'm 12...

So, I moused over my explorer windows on my taskbar and this popped up:

Yep.. giggled like a 12 year old and told my friends about it :)