Year of The Lemur

If I were an international spy or super villain, I’d want to be known as “The Lemur.” I’d get a lemur, name him Maurice, and train him to discreetly take care of my enemies… This post has nothing to do with lemurs. Nor does it have anything to do with my plans to rule the Evil League of Evil (E.L.E). Sorry.

2009 was not a good year for me. I don’t feel the need to go into any details, and all things being equal it certainly doesn’t compare to the year my father passed away. However, as we head into the end of January I’m feeling good about 2010, and wanted to share some of the reasons why.

In 2009 I spent too much time trying to do what I thought was right for others rather than simply what I thought was right. When I tried to fit into the mold others wanted to put me in, I was unhappy and, surprise, failed. I’ve gotten to where I am because I don’t fit the mold. This year is about being my non-conformist self again, with no apologies.

I’m revitalizing my interest in technology and solutions I care about, and continuing my personal journey and exploration of how technology can be used to communicate. I’ll have more to say on this later.

I’m going to be writing more. This is one area where I actually thought I had kept momentum up last year. Then I realized that the majority of writing I did in 2009 was either at the beginning of the year or at the end. My goal this year is to not lose momentum. This also extends into my work with sequential art. In addition to the blog I also have some side projects that marry technology and writing. I’m very excited about these and hope to publish more details in the next month or so.

This year more than any other I feel like the confluence of my skills, knowledge and interests has great potential. It’s this potential confluence that I’d like to spend the rest of the post exploring. When I mentioned my exploration of technology and communication earlier it was a bit myopic. Before I got involved in the technology sector I was a trained stage director and playwright. One of the primary themes I continually touched on was how people communicate, or rather how they don’t communicate.

This theme was fueled by something I see as a part of the human condition. It isn’t just a need to communicate with one another, but also a need to commune with one another, to understand in an unspoken way. This is the same need that spurs us to create art.

Now there are some very practical reasons this need is baked into our DNA. Humans are social animals, and it is this need to live, work and communicate together that allowed us to survive in the face of nature and other predators. Humans are not the only social animals on the planet, nor even the most socially structured. The insect world beats us hands down when it comes to structured social behavior. However, I think it was the way this need to commune combined with a capacity for knowledge acquisition and application that turned our tribes into cities.

As our technology takes us beyond our physical communities and borders, we still have a need to bond and communicate. I think we take for granted how much interaction and communication is non-verbal. The saying that a picture is worth a thousand words is not only true, but I’d bet that as you read it you pulled up a mental picture that demonstrated the concept. When you talk to someone on the phone, do you mentally picture that person as they talk? As our communications get briefer, how many of us resort to emoticons and lolspeak to help fill in the gaps and communicate mood? We are at a point where we need to look at how we communicate and interact in our new cyber communities. We need to look not only at how to use the tools we have, but also to discover new tools to move us forward.

We all traverse our own journey through our lives, whether we choose to recognize and embrace it or simply go along for the ride. Mine has taken me down a path from art to socio-cultural anthropology (for lack of a better term), and then turned down the path of technology. As an artist I explored how we communicate, which got me interested in human culture and social interactions. As a technologist I have been interested not only in how we interact with our technology, but also in how our technology interacts with other bits of technology on our behalf. My hobbyist interest in AI was never about building Skynet, but about using it as a model to better understand human thought and motivation. There is an opportunity, and in some ways a need, to bring all of these interests and disciplines together, and that excites me.

So. With all of this in mind, I hereby declare 2010 the year of The Lemur.

PS: I used the word “confluence” a few times in this post. I feel this word, more than any other, helps describe the themes I’m talking about here. To show how long I’ve been ruminating on some of these topics: when I ran my own theatre company back in the mid-’90s I named it the Wormwood Theatre Confluence. I’ve also owned the domain dataconfluence since 2003, though I’ve never done anything with it.

What do you know? You’re nothing but a SME

This is one for the rant files. I can’t begin to tell you how much I hate the term SME, but that’s never stopped me before so here goes…

“In order to control a people, you must first control their language.” Now I wish I could tell you who said this, if indeed someone has, but alas Google has failed me. However, this is a theme that crops up not only in science fiction, but also in history, and it’s even going on today. Why do you think people burn books? Why do marketing people invent new terms? Language is a way to express ideas. If you can manipulate the language, you can influence the way people think, if not what they think.

What does this have to do with my h8 4 the term SME? In case you don’t know, SME stands for Subject Matter Expert.

Subject Matter Expert: Sounds pretty impressive doesn’t it? You are an Expert in a Subject. People rely on you and your knowledge of the subject.

SME: Pronounced “smee.” Sounds like the cousin of a smurf doesn’t it? If you didn’t know what it stood for, would you like being called smee, or would you be offended? Belittled?

So why would anyone want to call somebody they rely on by a term that belittles them? Why do high-school jocks pick on geeks? Now I’m sure that doctoral theses have been written on this subject by people much more knowledgeable on the topic than I. Someone who studied long hours, and truly cared about the topic. Dare I say an expert in their chosen field of study, or subject. Wouldn’t it be pretty crappy for me to refer to that person as a “smee?” So just my opinion here, but I think that when one person needs to put someone else down in order to make themselves feel more important, that’s pretty sad. Now don’t get me wrong, I’ll put someone down all day long just for entertainment value (this site is called Malignant Genius, not Nice Puppy Love), but even I wouldn’t put down someone who has knowledge I’m relying on. Torture them until they bend to my will, maybe, but not put them down.

Now I’m not sure where the term originated or how it evolved into “smee.” While I’ve seen references to it in Six Sigma, the first time I came across it was from a project manager. Have no doubt that the way she said it was definitely meant to make you understand that, expert or not, you were her inferior. Sometimes she’d say “smee” so often in a meeting I thought she was going to ask me to acquire a shrubbery. My problem here is not with the definition or purpose of the original term; it’s with the way I feel it’s been perverted to keep my brother nerds enslaved to the frat boys and cheerleaders looking to still be at the top of the social pack.

So the next time someone calls you a “smee” ask to be called a Domain Expert instead. Or I suppose you could always demand a shrubbery of your own.

The Road Less Traveled

In consulting there are always challenges to be faced around staffing projects and managing the bench. While some are difficult to control (the economy for example), others can be worked on, and in truth do not need to be issues at all. I frequently see the following two scenarios, which fall into the latter category.

Scenario One: One region may be overbooked while another is on the bench. The people in the overbooked region are working 60+ hour weeks, and if things don’t change some folks are in serious danger of burnout. Conversely the people on the bench are bored and looking for a good project.

Scenario Two: A developer is staffed on a project because they are “down the hall” and have availability, while another developer who is not down the hall, but is also available, has the required skill set that the first developer doesn’t. Availability may not be a skill, but familiarity is simple and comforting.

Are there multiple factors that go into both of these scenarios? Of course, but please allow me a bit of freedom to focus on what I consider to be a major contributing factor: fear of the unknown. People don’t want to risk introducing unknown factors into their projects. It’s much more comforting to burn the people you know even hotter. It’s still a risk, but at least you know what those risks are. Besides, chances are the consequences will either be someone else’s problem, or at the very least come after the immediate crisis is over. In my experience people tend to quit after the project burning them out is over. Now I can’t really fault them for this. Fear of the unknown is baked into us, and for good reason. The caveman who ventured over the ridge without knowing what was on the other side seldom returned.

But let’s take this a little farther. What happened to those cavemen? Were they mauled by a tiger? Did they fall into a chasm? Die of hunger? In some cases certainly, but in others I think something extraordinary happened. They found greener pastures and new tribes to join. Sometimes they may have come back to their origins, and sometimes they were simply never heard from again. And what happened to those who stayed behind? Did they persevere as the bravest of them left, or did they wither and die, slaves to their own fears?

Ok, so I’m exaggerating and generalizing to make a point, but that point is valid. We’re taught that you’re better off with the devil you know. Allow me to let you in on a little secret: sometimes the devil you don’t know is much more fun. I’m not advocating jumping blindly into the maw of the abyss. Go in eyes open and be realistic. If you bring on another resource and expect them to dive into the toughest part of your project, you may be setting them and yourself up for failure. Conversely, if you keep them on the edges of the project doing small bits of cleanup work, then you marginalize them, never truly integrating them into the project or the team. While you may not be setting up failure here, you also aren’t allowing yourself to be surprised and delighted. And please, PLEASE, don’t pull in a resource at the end, after your core team has left the project, and expect them to save you. That’s called putting the turd in someone else’s pocket, and it will end in bad feelings all around.

So if you find yourself in a situation where you can either plod along in quiet desperation or welcome a new member to your tribe, take a chance. Walk the road less traveled. Hopefully you’ll be surprised and delighted. At the very least you’ll learn something about yourself and someone else.

So. I didn’t quite end up where I was headed at the start. With this post I start a series of ideas and thoughts I have been gathering about the challenges of working in and across groups. Taking a chance is just the first step. In future posts I’ll be exploring some of the challenges you’ll face once you start working outside of your immediate tribe, and explore some tools to help you navigate the road ahead.

Keeping up with the Joneses

A few weeks ago I had lunch with a former colleague I hadn’t talked to in several years. During the course of our conversation I learned that they are still maintaining VB6 code and classic ASP in the current generation of the product. Now this isn’t meant to condemn anyone. It happens. The reason I bring it up is that the question of how to keep up to date was raised. He said he’s still learning about some of the features in .NET 2.0, and here we are getting ready for 4.0. So how do you keep up with what’s going on in the here and now when, day in, day out, you are working and thinking about ten-year-old technologies?

Here are a few suggestions I made to him and I pass them along in the hopes that they might spark some ideas in someone else.

Work on personal side projects. It doesn’t matter if you ever complete the project. It matters that it be something you’re interested in. Build it with a technology you’ve never used before. If you can complete it and get it out in the world, then all the better, but if you get hung up on having to deliver something, you may never get started. The world is full of deadlines and commitments; have fun and learn something.

Re-think what you’re currently doing and how you might re-architect it. You may be working with older technology out of necessity, but there is nothing stopping you from taking notes about how you could update things. This can also be the basis of a modernization strategy for your company. Not a bad way to help your career either.

Read. Not just technical books either. Read about trends in your industry. If you’re in a situation where you need to seriously up-level your skills, you may have several choices as to where you go. For example, if you’re only getting started with .NET from VB or classic ASP, why not learn C#? Interested in mobile devices? How about a little Objective-C? The point is there may be trends in your industry or specialty that help determine where you go.

Read technical blogs. This can be a good way to keep up with what’s coming down the pike, and to get ideas for personal projects. It can also be good if you only have limited time. Be careful here: it can be addictive, and also a bit overwhelming. If you find you have too many entries to keep track of, think about cutting back on a few. I have a few feeds that I skim and then mark all as read, and others that I try to read every entry of. Be realistic as to what you can read, and be consistent about staying on top of it. That might mean checking in once a week or once a month. Find what’s right for you and stay with it.

While this isn’t by any means an exhaustive list, I hope it gives you some ideas, and that you come up with even better ones you can share.

Faster SQL calls with index covering

As sites and applications begin to scale on the data side of the equation, you can quickly find your database to be a bottleneck to performance. Index covering is one technique that can help speed up your queries dramatically.

Technique: Index Covering
Applies To: SQL Server, MySQL, Sybase (Possibly also Oracle, DB2)
When to use it: Thin queries (limited columns) that do not operate on the clustered index.

So what is index covering? In SQL Server, when you perform a query on a table that returns only columns contained in a non-clustered index, SQL will return the values from that index, thus bypassing reads to the actual data pages. This is called index covering, and it can greatly reduce the IO needed to perform the query.

Let’s say you want to make a call to your database that returns all of your customer names for a given state. The clustered index on the Customer table is CustomerID.

Select FirstName, LastName From Customer Where State = 'Oregon';

SQL will scan through all of the data pages looking for all of the records where 'Oregon' is the state. If you have large data rows this can be fairly costly in terms of IO. The field State is a horrible candidate for an index on its own, as there are only a finite number of distinct values that can be placed in the field. So what’s a princess to do when her Turing machine slows to a veritable crawl? Index covering.

Create Index cust_state On Customer(State, LastName, FirstName);

Since all of the fields needed for the query are now contained in the index, SQL will go to the index, find all rows where 'Oregon' is the state, and then, seeing that all of the other columns it needs are in the index, it will return the results. In this scenario SQL will never touch the actual data pages. Because index pages are significantly smaller than data pages, SQL uses less IO. Now place your non-clustered index in a different filegroup and your performance gain can be even greater. But that’s for a different article. Also note that our covered index would also work with the following query.

Select State, FirstName, LastName From Customer Order By State;

“I’m confused. You said earlier that State was a bad candidate for an index?!” Under normal circumstances it is. Good index practice would tell us to order our index columns like so, (LastName, FirstName, State), with the most unique (selective) values first. Remember that when working with index covering we are creating an index for a special purpose. When dealing with large scale databases you need to start thinking differently about your data.
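If you want to see the effect for yourself without a full SQL Server install, here is a minimal sketch using Python’s built-in sqlite3 module. SQLite also performs index covering, and its EXPLAIN QUERY PLAN output calls it out explicitly. The table and index names mirror the examples above; the sample rows are made up for illustration.

```python
import sqlite3

# In-memory database so the sketch is fully self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Customer (
        CustomerID INTEGER PRIMARY KEY,
        FirstName  TEXT,
        LastName   TEXT,
        State      TEXT
    )""")
conn.executemany(
    "INSERT INTO Customer (FirstName, LastName, State) VALUES (?, ?, ?)",
    [("Ada", "Lovelace", "Oregon"), ("Alan", "Turing", "Washington")])

# The covering index from the article: every column the query needs
# lives in the index, so the data pages never have to be read.
conn.execute("CREATE INDEX cust_state ON Customer(State, LastName, FirstName)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT FirstName, LastName FROM Customer WHERE State = 'Oregon'"
).fetchall()

# The plan's detail column reports a COVERING INDEX search.
for row in plan:
    print(row[-1])
```

Drop the cust_state index and run the same EXPLAIN QUERY PLAN again, and the plan reverts to a full table scan.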

Large scale data systems like the ones I’ve been thinking about lately can seem more like data warehouses than OLTP systems. In some cases they are both at the same time. I think the biggest mistake a developer or architect can make is to become rigid in their thinking about how data needs to be defined and structured.

PS: For those of you who got the veiled reference to Diamond Age. Gr@75!

Performant Scalable Thoughts

I’ve been filling my brain with thoughts of online community building and of involving fans more in how you define your brand. I’ve been reading what I can in the space, whether blogs or books, trying to form a clear picture of where we are headed. Not just as an industry, but also as a connected society. For those of you paying attention, that’s right, I said “fans” and not customers. If you aren’t thinking about your customers as fans, then you probably also think that this whole internet thing is just a fad.

So what does this have to do with scalability? Lots! For those of you who don’t already know, it’s my job to translate what our marketing and design teams come up with into deliverable solutions for clients. As a digital agency this mostly consists of ways to communicate with brand fans, and a campaign that takes off can need to scale up quickly to handle large numbers of users, then scale back down over time. At the same time you don’t want to spend an exorbitant amount of time (and client money) building a monolithic site that can infinitely scale if you are only expecting a few thousand users. Don’t get me wrong: a few thousand well-targeted users who are fanatic (there’s that word again) about your brand can be a higher measure of success than 20 times that number who visit your site once and never give you another thought. But what if your site takes off virally? Would you have to throw out your existing code and start over, or would you have all the pieces in place from the start to help you scale?

While not every site needs to scale like Facebook or Twitter, there are several things that can be done with no impact to existing budgets that will help with scalability down the road and performance immediately. This post is the start of a thread on the subject of scalability and performance. While some posts will be short snippets of practical advice that any site can take advantage of, others will be targeted to specific problems involved in scaling a system.

So here is the first tidbit of information that can be used in any web site. It involves client side browser performance.

Most browser caches are case sensitive, as they do a straight string compare on the named resource. Therefore, if you include a reference to an image file on a web page like so

<img src="Image.jpg" />

it will download the file and place it in the browser cache. If you later reference the same image on the same or another page, but change the case in the source URL as such

<img src="image.JPG" />

the browser will download the file from the server again, as it perceives it as a different file/URL, and create a second entry in the browser cache.

This isn’t a problem if your web server acts in the same manner as Apache, where all URLs are case sensitive to begin with. In that case the server will not recognize a resource unless the URL is presented in the same case as the file on the server. Because IIS isn’t case sensitive, a browser could potentially download a file hundreds of times if the case in the source URL is slightly different every time.
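One low-effort defense, sketched below in Python (the helper names are my own illustration, not any standard library API): route every asset reference through a single canonicalizing function so your markup only ever emits one spelling, and the browser cache only ever sees one entry per file. This assumes a case-insensitive server like IIS; on Apache the canonical form would have to match the actual file name on disk.

```python
def canonical_asset_url(url: str) -> str:
    """Return one canonical spelling for an asset URL.

    Browser caches key on an exact string compare, so "Image.jpg" and
    "image.JPG" get downloaded and cached as two separate resources.
    Lowercasing everything guarantees a single cache entry per file.
    """
    return url.lower()


def img_tag(url: str) -> str:
    """Emit an <img> tag that always uses the canonical URL."""
    return '<img src="%s" />' % canonical_asset_url(url)


# Both spellings now produce identical markup, so the browser
# downloads and caches the file exactly once.
print(img_tag("Image.jpg"))
print(img_tag("image.JPG"))
```

The same idea ports to any templating layer: the important part is that there is exactly one place in the code that turns a file name into a URL.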