Handling Poison; Messages, I Mean

A poison message, for the uninformed, is a “queued” message that can’t be processed for one technical reason or another.  That doesn’t sound like much of a bother, but due to the way queues operate, such messages can muck up the works if left unhandled.

Azure WebJobs does a decent job of handling poison messages out of the box. Basically, you’re given five tries (by default) to process a given message.  If, for instance, the database you’re trying to write to is unreachable, or the instructions embodied in a message are somehow malformed, then the message is automagically moved to a “poison” queue for further processing; but only after five failed attempts.  Any further handling is up to you.
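In code, the whole arrangement is pleasingly small.  Here’s a minimal sketch against the classic WebJobs SDK (the “orders” queue name is made up for illustration; the “-poison” suffix, though, is the SDK’s own convention):

```csharp
using System.IO;
using Microsoft.Azure.WebJobs; // NuGet: Microsoft.Azure.WebJobs

public class Functions
{
    // Invoked for each message on the (hypothetical) "orders" queue.
    // If this method throws, the SDK re-queues the message and tries
    // again, up to MaxDequeueCount times (five, by default).
    public static void ProcessOrder(
        [QueueTrigger("orders")] string message, TextWriter log)
    {
        log.WriteLine("Processing: " + message);
        // ...work that might fail: database writes, parsing, etc....
    }

    // After the final failure the SDK moves the message to a queue
    // named "<original>-poison"; from there, handling is up to you.
    public static void ProcessPoisonOrder(
        [QueueTrigger("orders-poison")] string message, TextWriter log)
    {
        log.WriteLine("Poison message: " + message);
        // e.g. log it, raise an alert, or park it for manual review
    }
}
```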


I Need a [Web]Jobs

Once again, with the new-formed buds of spring a-budding, it’s only natural for a young man’s thoughts to turn to . . . WebJobs.  OK, probably not.  But this (not quite young!) man has a good reason to be thinking of such things: I’m giving a WebJobs presentation at Philly Code Camp 2016.1 later today.  The session will be more than a little hands-on, but I also have a smallish deck, with a number of highlights and resource links, that you can download: Batch Processing with Azure WebJobs.

For the uninitiated, WebJobs can be thought of as the newish Azure tech that (most usefully!) enables workflow for the modern web.  Think image processing, shopping carts, database administration, Monte Carlo simulations, app “glue,” process control, AI, custom testing, IoT facilitation, site scraping, backups, pipelining, log ingestion and more.  Even better, WebJobs are super-simple.  You can be up and running in minutes, and more importantly, the “hard” stuff rarely takes more than hours.
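To back up that “minutes” claim: the entry point of a classic WebJobs console app is, give or take, three lines (a sketch, assuming the Microsoft.Azure.WebJobs NuGet package):

```csharp
using Microsoft.Azure.WebJobs; // NuGet: Microsoft.Azure.WebJobs

class Program
{
    static void Main()
    {
        // The host discovers public static methods carrying trigger
        // attributes (QueueTrigger, BlobTrigger, and friends) and
        // runs them until the process shuts down.
        var host = new JobHost();
        host.RunAndBlock();
    }
}
```

Everything else is just writing the triggered functions themselves.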

If you work with the cloud: run, don’t walk.

Koza’s Ant (A Modern Take on the Canonical Genetic Programming Problem)

[Image: Koza’s Ant Evolver]

We’ve all had those less-than-notable-at-the-time yet ultra-significant inflections in our world view that in later days loom large.

I had one of those “moments” in 1993, on an otherwise ordinary fall day when I’d squired my not-yet wife to an unmemorable building on the Northwestern campus, in Chicago.  Marie is a Set and Costume Designer so I have to imagine that we were there for some sort of rehearsal, or maybe a design meeting; something about Orpheus Descending at the Chicago Lyric Opera teases at my memory, although given the remove of 22 years the details have faded.

One thing I vividly remember, though, is reading Steven Levy’s “Artificial Life: A Report from the Frontier Where Computers Meet Biology,” a book I picked up at the campus bookstore while waiting for Marie to finish whatever she was doing.  She must have been at it for hours, because I managed to gulp down something like half of the thickish volume before she emerged from the building.


NoSQL? With DocumentDB, that ain’t quite true!

The Microsoft folks continue to beaver away in PaaS-land, serving up new Platform as a Service offerings on a seemingly weekly basis.  One of the best of these is the new pay-as-you-go DocumentDB service, a fully managed, highly-scalable NoSQL (JSON) document database that provides:

  • Schema-free storage of arbitrary JSON documents
  • Automatic indexing that supports complex queries
  • Transaction support with ACID semantics
  • Server-side programmability with JavaScript
  • Write-optimized, SSD-backed and tunable via indexing & consistency
  • Open-source client SDKs for .NET, Node.js, JavaScript and Python (with C++ and Java on the way)

DocumentDB should be thought of as a complement to SQL Server, Table Storage, Blobs, etc.; not a replacement.  It’s a pretty new offering, but even so, there’s already a fair amount of related documentation around on the net.
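To make the first two bullets above concrete, here’s roughly what the .NET SDK experience looks like (a sketch only; the account URI, key, and database / collection names are placeholders):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client; // NuGet: Microsoft.Azure.DocumentDB

class DocDbSketch
{
    static async Task RunAsync()
    {
        var client = new DocumentClient(
            new Uri("https://myaccount.documents.azure.com:443/"), "<auth-key>");
        var coll = UriFactory.CreateDocumentCollectionUri("demoDb", "demoColl");

        // Schema-free: any serializable object is stored as a JSON document.
        await client.CreateDocumentAsync(coll,
            new { id = "1", type = "widget", price = 9.99 });

        // Despite the "NoSQL" label, queries use a familiar SQL dialect
        // (hence the title of this post!).
        var widgets = client.CreateDocumentQuery<dynamic>(coll,
            "SELECT * FROM c WHERE c.type = 'widget'").AsEnumerable();

        foreach (var w in widgets) Console.WriteLine(w);
    }
}
```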

If you’re one of my fellow Neudesic-ies, I’ll be giving an in-depth DPG presentation on the topic on Friday afternoon at 1:00 PM EST.  Please feel free to download my deck or clone the GitHub repository that contains the DpgDocDbDemo code that I’ll be showing in my talk.

Code Camping (Round 2!!)

For the uninitiated, Code Camp is a free development conference, most typically sponsored by Microsoft and staffed by both local volunteers and up-and-comers on the national / international developer circuit.  The received truth is that they’re not quite as good as the better-known conferences (e.g. Build, VSLive, etc.), but in my own experience they’re extremely good, especially in terms of learning actual coding techniques as opposed to simply hearing product announcements.

By way of disclosure, it’s important to note that I will be speaking at the next Code Camp NYC (Saturday, November 23, 2013, from 8:30 AM until 5:00 PM), so I am more than a little biased.  My topic will be:

Supercharge your apps with TPL Dataflow

The TPL Dataflow Library allows mere mortals to craft CPU-intensive and I/O-intensive applications that support high throughput and low latency while tightly controlling memory usage.  This code-centric session will explain how TPL Dataflow works (along with a number of related technologies, such as async / await), the advantages of TPL Dataflow over more traditional parallelizing constructs, and most important of all, how to supercharge your own apps.  To drive the power of TPL Dataflow home, I’ll also show you how to write a blazingly fast web-crawler in less than 200 lines of code.
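To give you a taste of the programming model ahead of the session, here’s a toy two-stage pipeline in the same spirit (not PodFetch itself; the URL and tuning values are purely illustrative):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow; // NuGet: Microsoft.Tpl.Dataflow

class PipelineSketch
{
    static void Main()
    {
        RunAsync().Wait();
    }

    static async Task RunAsync()
    {
        var http = new HttpClient();

        // Stage 1: download pages, up to four at a time; the bounded
        // buffer applies back-pressure so memory stays under control.
        var download = new TransformBlock<string, string>(
            url => http.GetStringAsync(url),
            new ExecutionDataflowBlockOptions
            {
                MaxDegreeOfParallelism = 4,
                BoundedCapacity = 100
            });

        // Stage 2: consume whatever stage 1 produces.
        var report = new ActionBlock<string>(
            html => Console.WriteLine("Fetched " + html.Length + " chars"));

        download.LinkTo(report,
            new DataflowLinkOptions { PropagateCompletion = true });

        await download.SendAsync("http://example.com/");
        download.Complete();
        await report.Completion;
    }
}
```

That BoundedCapacity setting is the “tightly controlling memory usage” part: when the buffer fills, SendAsync simply waits rather than letting work pile up.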

While I’m in a disclosing mood, I just finished the above-described web-crawler, called PodFetch.  It is indeed blazingly fast.  On the other hand, I couldn’t stop myself from adding a number of refinements (like colorized logging), so it came out to something like 350 lines of code.  Never fear, though, because through the magic of the compiler, the code squeezes down to a mere 59 lines of MSIL, which, by my count at least, is less than 200! 🙂

You can download the code from GitHub.  If you attend Code Camp NYC, be sure to stop by and say hi!

The Internet Of Things

[Image: Arduino with PIR detector]

I attended a sort of corporate symposium last week where my peers and I discussed the current technical state of affairs; most specifically, to figure out how we could leverage this rapidly expanding realm of possibility.  One key area of discussion was “The Internet of Things.”  The basic idea is that more and more of the artifacts around us are becoming (at least in some sense) intelligent and interconnected!  Now I’m not necessarily talking about your toaster discoursing on Shakespeare with you, but even as I write that purposefully “absurd” phrase, it occurs to me that it isn’t so very absurd at all.  Everyday objects are being enhanced with more and more capabilities, and I have started to take advantage of this pleasing fact.
