This has been an eventful week…a lot going on at work, friends facing different journeys, personal things. Through it all, one Bible verse stood out in particular:
“For I know the plans I have for you,” says the Lord. “They are plans for good1 and not for disaster, to give you a future and a hope.” Jeremiah 29:11 (NLT)
God does not promise that he’ll ever tell you what that plan is, or what the definition of prosperity is, but he promises good, a future, and hope. Faith is just that…it’s knowing that HIS definition of good and prosperity is sufficient, and that no drought, famine, captivity, exile, or banishment (think about it – those are far worse situations than most of us will ever face) will keep him from ensuring that the plan is fulfilled.
1In the NIV, this reads “prosper,” which carries a slightly different interpretation but the same emphasis.
I am putting this right up front – the tenor of this blog is about to change. About 7 months ago, my beautiful wife and the love of my earthly life authored a truly emotional blog post about her past and her faith. I was blessed to have not only witnessed the journey she’d taken to get there but to watch the incredible outcome.
She was not the only one to go through a dramatic change in her life. Since my re-birth just shy of three years ago, I’ve gone through a journey of change, a transformation from the “grumpy designer-developer-teacher” (my old tagline) into the man I am today. This man is one who fully believes that God is in control of his life, that Jesus died to pay for sins I could not pay for, and that the Bible is the word of God breathed into written form.
So my posts will evolve. My journey was full of difficulties and struggles, but it brought me to the transformed state I’m in today – and along the way I neglected to share its effects on my personal life, my emotions, my career, my family. Now it’s time.
This is a post based purely on conjecture and anecdotal evidence so take it for what it is.
Earlier, a colleague sent me this article – The Culture of Long Agency Hours – and frankly, I think all of us have been there (and this doesn’t just mean white collar and agency jobs, though it is particularly pointed at salaried staffers). I’ve spent more nights than I care to remember leaving the office and rushing to catch the last train out.
I played that game through my budding career in the hotel business, where spending entire weekends as manager-on-duty was the norm, and through the beginnings of my career in tech, where bleary-eyed, Mountain Dew-driven nights were a rite of passage. For some reason, my naive head was all too sold on the idea that the more hours I worked, the more productive I would be; that the more productive I was, the happier my bosses would be; and that that would lead to some great compensation at the end.
But along the way, it took its toll. Right off the bat, the time I spent with family and friends waned. Like most, I tried to pretend that drinking at the end of a late-night coding session counted as “friend time,” but in fact it was just an extension of work, an attempt to shore up the buddy-buddy relationships that had replaced my real family. More than that, the quality of my work also took a hit. Management wants you to work late because they think it gets more done. We begin to believe it. We begin to think the sub-par work we produce in our 12th hour is a substitute for the focused, energetic work we’d do if the rest of our lives were happier. As Marshall says in his article,
Too often at agencies, it’s those staffers who work the longest hours that are deemed the most committed to the cause. The culture is one of bragging rights, in which she who logs the most time in the office is often deemed most important to the organization, despite the fact she might simply be inefficient. The system often rewards quantity over quality. And it trickles down. Account people are often expected to stay around late with creatives — out of solidarity.
The inconvenient truth is, this grueling work schedule is a choice, and a bad one at that. In short, ad industry execs work long hours because that’s what industry execs do, not necessarily because it’s the best way to operate.
It’s a proverbial carrot at the end of the stick and at age 40, I decided to change that.
Having reached a point in my career where I could comfortably push back against executive expectations with metric-based results, I drew a line in the sand and stopped working late. Just stopped. And I didn’t lie about it either. I was just tired, I said. Tired of trying to solve complex code problems through anxiety about my family, through fatigue and clouded judgment.
I was fortunate. At the time, my bosses (in not one, but two companies) were amenable to the idea. This didn’t mean that there weren’t nights spent working, and it certainly didn’t mean that I skipped taking work home sometimes. What it did mean is changing the regularity and changing the expectation. Work time is for work. Personal time is for you (whatever that happens to be).
But once the idea started to take root, it took an interesting turn. Co-workers started jumping on the bandwagon. And even more interestingly, the quality and speed of our work improved as a whole. It became something of a game – see if we can produce spectacular work at breakneck speed. We didn’t cut corners, we didn’t skimp on the product. What we did instead was become acutely aware of our time and focus. Meetings were kept succinct by avoiding banter. Code time was focused through frequent code reviews and lots of unit testing. Design specs were complete but left room to improve, and developers were given leeway to make those decisions without repercussion.
And gradually, something crazy happened. To be fair, all of us were early risers (an unusual trait for tech people, but you’d be surprised), but we started going in early and pushing back the time we left. The condition was that we start the week with a plan and, before leaving each day, report the measurable results of our work. So we took steps to ensure we hit that goal, and then went one step further. Pretty soon, our workdays were 6:30am to 3pm, and we had a good chunk of afternoon to live life.
But the best result was, we were all happier. Much happier. And this led to the best change – management realizing that happy employees produce better work. Once this was realized, the snowball gained speed and even management jumped into the game.
Now, I’ve just moved back to NYC and, well, the atmosphere is back in the “long hours” syndrome. Right off the bat, I made my personal requirements known and stuck to them because of family needs. But with that, I had to make sure my own output warranted it. Management is more amenable to focusing on quality of output when you can measurably demonstrate that you deliver it, and that there is a likely drop-off in quality and commitment once the late nights begin. Again, that’s not to say there won’t be instances where staying late is necessary – being around your colleagues in the midst of a project is a positive – but make sure it doesn’t become YOUR norm…that will simply translate into an expectation.
The real question is, can you make the effort to make the change? If not, quit bitching and get back to work. I’ll remember you while I am at home enjoying the rest of my day while you’re still slaving at your desk.
This is actually just a test. I’m having a rough day. Right now I’m running an experiment into why files play in iTunes but not on devices when using podPress.
Update: In case you’re wondering what this was all about…I was working on a problem with podcasts that would download in iTunes but fail on devices. After some experimentation and research, we finally determined that the problem was due to a server-level configuration: the host was not accepting byte-range requests. A byte-range request is a method that lets the requester ask for a specific segment of a download rather than the whole file. Because of the nature of podcasting, iTunes requires this. Apparently at some point, that config was disabled on the server.
Anyway, the symptom of this problem is files that fail to download directly to an iPhone/iPod/iPad (you get an error) but download fine in iTunes on a desktop. It is not a problem with a podcast publishing plugin such as podPress or Blurbby, though apparently it can crop up if you use the media uploader in WordPress to put the files on the server (use FTP instead). I hope this helps if you’re having the same problem.
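If you want to check a host for this yourself, here’s a small sketch in Python. It sends a one-byte Range request and interprets the reply: a server that honors ranges answers 206 Partial Content (or at least advertises Accept-Ranges: bytes), while a plain 200 with the full body means the range was ignored – the symptom described above. The URL is a placeholder, not a real feed.

```python
# Sketch: does a server honor HTTP byte-range requests?
# (iTunes requires partial-content support for podcast enclosures.)
from urllib.request import Request, urlopen

def supports_byte_ranges(status, headers):
    """Interpret the response to a 1-byte Range request.

    206 means the server served the requested range; otherwise we fall
    back to checking whether it at least advertises range support.
    """
    if status == 206:
        return True
    return headers.get("Accept-Ranges", "").lower() == "bytes"

def check(url):
    # Ask for only the first byte of the file.
    req = Request(url, headers={"Range": "bytes=0-0"})
    with urlopen(req) as resp:
        return supports_byte_ranges(resp.status, dict(resp.headers))

# Example (placeholder URL -- substitute one of your own media files):
# check("https://example.com/podcast/episode1.mp3")
```

If `check` comes back False for your media files, the fix is on the hosting side (re-enabling range support in the server config), not in the podcast plugin.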
I love to shop for groceries. I know, sounds weird, but I do. There is something incredibly “relaxing” (more on that in a moment) about just browsing shelves and shelves of pretty boxes and imagining the incredible dishes one might concoct after leaving the market. For those of you that think I’m a little strange (particularly women), is there any difference between that and shoe shopping? Basically, aside from the product in question, the shopping experience is universal given the right items.
I began to question what it is that I love so much about it. OK, yes, I like to cook. In fact, my other favorite thing to do is cook for my wife and try to astound her with each meal. So it’s the result of my shopping that leads to cooking, which leads to eating, which leads to a smile on her face. Long way to get to a goal, isn’t it? And an awful lot of effort just to end with a smile. So there must be more to it than that. I started to dissect each part of the process in an attempt to figure out why I would go to such extremes.
I live on the Upper East Side in Manhattan, but I haven’t always. Grocery shopping in Manhattan can be exhilarating and frustrating all at once. On the one hand, I have the sheer fortune of being near Fairway Market on 86th Street. Fairway, billed as “a store like no other,” is an incredibly practical shopping oasis for the neighborhood. Yes, to be sure, there are many incredible food shopping venues around – Chelsea Market, Union Square Market, several good Whole Foods – but purely on price and breadth of product, Fairway beats everyone hands down. So basically, I save money, and I get a lot of foods in a very small space.
The experience of shopping at Fairway is something in and of itself that is hard to describe other than “ordered pandemonium.” Much like the bustling city sidewalks, the aisles are literally full from end-to-end with people, carts, children, stockers. All stopping, checking, reaching, bending, lifting and moving together in concert, yet aimlessly disconnected.
The summation of my meandering thought is this…it is the entire user experience that is called into question here, and particularly because it is a long chain of events, it can’t be viewed as a singular thing. In fact, it has to be viewed as a series of interrelated, intertwined user experiences.
Package design is much like web design. Grocery shopping is much like surfing.
Fairway is a local supermarket, billed as “a market like no other.” For weeks they advertised a product – “Figs and Satongo Chocolate” – and my wife spent those weeks hunting the shelves for it. When she finally found it, she bought six jars.
Product placement is SEO. Yes, the big companies will swallow up the smaller ones simply because they can spend more to “advertise” their value proposition. But every once in a while, a product will break out of the mold.
But then, so is the Web. In the ether, our requests and responses traverse the digital aisles with incredible precision, relatively low rates of error, and manage to make it to the checkout counter unscathed. This whole exercise is user experience in and of itself, and as a digital society we’ve highly attuned ourselves to the entire process – how fast is a site on desktop versus mobile, how does RWD improve or degrade the content delivery, how does browser rendering mess with design? And does it really matter in the end? At the end of the day, do you remember the pixels that make up the sites you visit or do you remember the content inside it? Do you remember the shades of colors in the background or the overall experience of the visit? Do you keep the box your pasta came in or do you discard it?
The point is that user experience over time persists, but design in general does not. It doesn’t mean we hate the design, but maybe we are inundated with too much of it. I’m a weird guy – I love to grocery shop. I love the experience of it, I love combing the aisles for little food finds, yet I actually hate surfing the Web. At one point a few years back, the rocket-rise of Flickr was attributed to a phenomenon called serendipitous browsing and if anything, that’s what my food shopping experience is like (I can’t say that for my regular shopping, which I can do without altogether). But I still can’t do it on the Web. I get bored.
I read this article by Jason Cohen focused on what he terms as “successful unsustainability” after the (recent) failure of Color following a $41 million VC round. I was interested largely because of my first roots in the industry and after reading it suddenly realized I’d never written about what happened.
In 1993, I joined a new, fledgling company called Visual Radio after leaving a successful career start in the hotel industry and burning out (I was an accountant, and it turned out to be both boring and demanding). Originally I was just there to do the books, but through a series of events – and by the pure chance that I knew how to code Perl – I eventually moved into technology.
Circa 1994, the “chief engineer” (back then basically the head geek; there really wasn’t much in the way of formal titles) was abuzz about this thing called “the Web.” Our original business model involved a subscriber-based system that married a CD-ROM of video, audio and images to a BBS-run system using proprietary software we dubbed 2Hact. By this point, two years in, the model had produced a lot of buzz and very little cashflow. It was time for a change.
So we started peeking into the idea of this Web thing. We did a few web sites – not a real big deal, except that we had behind us a bunch of loose one-off projects that eventually became something – one of the first credit card gateways (using an X.25 back-end dialer system), some very early dynamically-driven sites (using both flat files and early RDBMS), streaming (yea boys, read that – streaming circa 1997, using license #1 from Real), and agent-based COM libraries. For a “web shop” (as we were calling ourselves by 1995), we were dabbling in an awfully wide array of things.
Deals started to roll in fast. By 1997 we’d done some really killer work. In other words, we were every other pre-bubble dot-com. For example, we’d helped design and develop bizTravel – an early piece of the Expedia puzzle. We’d built an early streaming video system with moderated chat – more or less a pay-per-view version of pay-per-minute 800 lines. And we’d expanded on our credit card systems such that by 1996 we had built several gateways into the few banks that were early pioneers.
The team was unusual – but then again, so were most dot-com shops of the day. Since there were few CompSci guys and even fewer books (and no extensive Google), most of us had to learn to code “the hard way” – trial and error. Our designer was a former architect. Our complete lack of regard for any of the limitations of technology or design was what got us places.
But like most other places, by 1997 there was a sense of invincibility and unlimited growth. We were making money. Lots of money. Deals were all around us. We decided to get involved with two (now defunct) companies in a new technology – DSL. We made the wrong bet.
It’s a shame to watch a company you’d helped foster fall apart, especially when you were the one who’d voiced concern about changing business models mid-stream. I’m not saying the company would have survived had it stuck to its core – nearly every other dot-com died by 1999 – but to become unsustainable in the way it did was a bad end. I was fortunate – I left before all that happened. But it did affect my team, and it was hard to watch.
In retrospect, at the very least, it made me more cautious. While I still love the thrill of being in startups, I don’t chase the dreamers; I look for something with a definitive trend behind it. Success in a technology business is never guaranteed by the ability to attract venture capital, nor even by an awareness of the long-term trends of technology. In his article, Cohen mentions “lasting value” – sustainability – probably one of the best comments I’ve heard this week.
If you are not an entrepreneur – which most of us never will be – this is still important, because the company you work for needs to have some of that built in. Yes, in the tech world jobs are relatively easy to come by, but never count your chickens before they hatch (sorry for the cliché). As I’ve gotten older, security has become an increasingly important factor in employment, right behind enjoyment and maybe even ahead of growth opportunity. So look for it – stop going after the flash in the pan, because it’s highly unlikely that your ESOP will ever pan out.
I’ve deliberately left names out here. Most of us have moved on and only a handful are still in the technology business. Any of you I haven’t kept up with that reads this, please feel free to get in touch.
Much has been written about innovation and disruption around the tech world lately. I think the terms are pretty clear at this point though there also seems to be a lot of poking around whether one precedes the other or whether they can be mutually exclusive. I was particularly moved by a recent post on a favorite blog – UXMag – about this topic. However, I’m going to take a bit of a different stab at the topic.
This is a personal post about changes in my life and how I now see innovation and disruption. I’ve been in tech for nearly 18 years – in the front lines developing both interfaces and architecture, in the back strategizing, in front of university students teaching, and more recently entrenched in user experience and product management – so I can truly say that I have watched the industry rise and fall, and more than a few times.
I can also truly say that the person I am today is vastly different than the one I was 18 years ago. Aside from personal sways, aging, and the pure changes in the industry itself, there was one major event that changed me – coming to faith in Jesus. Now, before you write off this post as just a big testimony argument, I urge you to read on and understand where I’m coming from.
My coming to Jesus was not exactly a run towards but more of a grudging acceptance. My wife, encouraged by a personal pull from God towards him, convinced me to tag along. Eventually, as I began to notice more and more the little things that were happening both around me and to me, it became more apparent what was at play and where it was coming from.
The innovation occurred when we left Las Vegas. Let’s say that while there are certainly pockets of good technological achievement in LV, it’s not exactly a bastion of innovation. Back in NYC – maybe it’s the atmosphere, maybe it’s the attitude – there is definitely a bigger desire to change and to look at things with different eyes. I’ve been in this business for over 18 years now, long enough to think I’d truly seen pretty much everything. But I was wrong. New things, new evolution in technology, the growth of new ideas, and especially the shift in data and design have both sparked new thoughts about approach and reinforced my intention to improve what exists.
The disruption came with the realization that all this time, God has been leading me patiently down a path towards something. What that something is I don’t fully know yet, but I am letting Him take the wheel and drive. In doing so I am seeing things differently than before – not just the standard Christian life stuff, but even how I view my own work and what I do. Every day I realize how little I am in control and how much He guides my every move. In the moment it sometimes doesn’t seem like it, but look at the wide angle and I see the connections: He puts me where I need to be at the right time, and provides for me in ways I didn’t even know I needed.
I am blessed to have a fantastic and wonderful wife, a great church family, a job I enjoy, and most of all God to guide me. Proverbs 3:6 says “Be thankful, and in all your ways acknowledge him and He will make your paths straight.” So true.
Advertiser 3D by adMarketplace was designed to help you manage your search network advertising campaigns efficiently and easily. It shows you what happened in your campaigns, why it happened, and provides customized suggestions based on your individual performance goals.
Critical areas of the interface, like your key performance metrics, are bold, colorized, and placed in the center of your dashboard so you can quickly get the information you need. Below this is the supporting data grid, complete with inline editing and pivot-style navigation, so you can easily find and tackle underperforming areas in your campaigns and focus your efforts and dollars on the highest-performing keywords and traffic sources.
Unlike other search networks, adMarketplace provides advertisers with complete traffic source performance data so you can zero in on the optimizations that produce the biggest impact. And 3D is the first platform that allows you to bid on keywords by traffic sources to drive your search performance. Why run a high cost-per-action on a relevant keyword when you could eliminate a bad traffic source and concentrate your spend on traffic sources that perform well for that keyword?
Above your key metrics and data grid, you’ll find new charts and apps. These quick and intelligent mini-interfaces provide reporting and updating options and even use your own performance data to suggest optimizations to improve performance or increase traffic across your campaigns. You can even review and reverse any optimization actions you’ve taken that didn’t work well by tracking your previous changes.
Advertiser 3D continues to evolve. New features are already in development. Get started today with the simple, transparent control you need to achieve your performance goals.
Advertiser 3D from adMarketplace – Finally, search network advertising the way it should be.
For the past 8 months I’ve been leading the development of Advertiser 3D – a system for managing performance-based search syndication ad campaigns. From a user experience perspective, the project was a nightmare from the beginning – the challenges that the new architecture created were enormous simply because there was no precedent for how to do it. In a happy-happy theoretical world, we like to say that this type of challenge is what we all look for – the chance to innovate and be disruptive, but the reality is that it also creates enormous headaches.
Advertiser is one of three core interface-based software products within the adMarketplace environment. Specifically, it provides online ad campaign management for advertisers within the network. Advertiser 2 provided all the basic needs but suffered from several drawbacks – most notably a piecemeal interface that lacked both consistency and continuity, and an antiquated traffic source management system based on “bucketing” (a means of adding or removing sources based on a scoring system that indicated confidence in ability to provide converting clicks).
adMarketplace rightfully touts itself with “we are big data.” We take in a terabyte of click data every hour. Sure, we don’t add up to Google, but we don’t intend to. What makes aMP special is what it does with that data. For years, the executives at adMarketplace knew the system was good and, more importantly, that there was great data behind it – but that a better system was needed…a new way of visualizing and managing it. Thus, 3D was born.
I joined the team at the start of the year. Frankly, if I had known then what I know now, I probably would have been too daunted to take on the challenge (which for someone like me who likes complex challenges, is a difficult admission). So in this case, a little bit of blind ignorance was probably a good thing.
We were behind the eight ball from the start. We knowingly started the project without having all our specs in place, without having fully vetted the customer needs (which, to be fair, were largely those of our own staff), and short on human resources (by our own admission, at least one full scrum team). To add to the mix, by the end we’d also lost twenty percent of the team, wasted an incredible number of iteration cycles that normal design processes would have prevented, and gone through the usual tug-of-war of features versus bugs throughout the RC phase.
Month 1 – having just joined the company, my feet not yet fully wet, my first decision was to fire the outsourced UX team. Despite the relatively decent UX process they’d conducted, they’d completely missed the mark on what the product was. Strange that within a week I was already keenly aware that the true innovation of the product was one thing, yet couldn’t find it remotely discussed in any of their assessments. So they had to go. What did get salvaged was a reasonable slice of UI components that eventually made it into the current interface.
Month 2 – struggling with product versus development. Having been a developer in the trenches for years, I completely understood where the team was coming from. Nonetheless, this was not the time for continuous routines and processes. Yes, code reviews and refactoring and modularizing are a fact of life, but they can also hold creativity and imagination back (as we discovered in month 9). So development was painfully slow. On one side, executives were anxious to see progress. On the other, developers wanted their code to flow like Shakespeare.
On the product side, we were still struggling with the innovation part. At this point, I suppose I should introduce it. The innovation Advertiser 3D brings to the table is traffic source management. OK, it’s not a total innovation, but the extreme to which we take it – and being first to market in this form – will make it a disruptor. Google already offers traffic source management, but only on one segment (AOL) and only to a small degree. Here we’re talking about complete bid and cap management across any and all traffic sources, and against keyword matches. This is key. Currently, ad networks allow you only one simple lever to try to increase yield – keyword bidding. With traffic source management, you can now bid up, bid down, or cap on source performance for a keyword, or on keyword performance for a source (yes, two different angles). This means you can individually select the best keyword-source combinations, which in turn creates improved yield, lower CPA, increased CTR, etc.
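To make the yield argument concrete, here’s a toy sketch – invented names and numbers, not adMarketplace’s actual algorithm – of why per-(keyword, source) control beats keyword-only bidding. With one bid per keyword, good and bad sources blend into a single average CPA; with per-pair control, the bad source can be cut while the good one keeps spending.

```python
def allocate(stats, target_cpa):
    """Keep only keyword-source pairs whose cost-per-action beats the target.

    stats: {(keyword, source): {"spend": float, "conversions": int}}
    Returns the surviving pairs mapped to their observed CPA.
    """
    kept = {}
    for pair, s in stats.items():
        if s["conversions"] == 0:
            continue  # no signal yet -- a real system would explore, not just drop
        cpa = s["spend"] / s["conversions"]
        if cpa <= target_cpa:
            kept[pair] = cpa
    return kept

# Invented example data: one keyword, three traffic sources.
stats = {
    ("running shoes", "sourceA"): {"spend": 50.0, "conversions": 10},  # CPA 5.0
    ("running shoes", "sourceB"): {"spend": 90.0, "conversions": 3},   # CPA 30.0
    ("running shoes", "sourceC"): {"spend": 40.0, "conversions": 0},
}
# Keyword-only bidding would see one blended CPA of (50+90+40)/13 ~ 13.8 and
# might drop the whole keyword; per-pair control keeps sourceA and cuts sourceB.
print(allocate(stats, target_cpa=10.0))  # {('running shoes', 'sourceA'): 5.0}
```

The same logic runs symmetrically from the other angle (all keywords on one source), which is the “two different angles” point above.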
Months 3-5 – lots of development, and a lot of struggles to come to grips with the traffic source conundrum. Turns out that what sounds good on paper is not so easy to put into practice. At one point, the specs called for extensive badging – the ability to provide levels of whitelisting or blacklisting against sources. Once put into practice, though, it became too much. Even our own team, with all its experience in traffic source management, had difficulty keeping control. Hierarchies of sources (imagine a network site with a bunch of smaller sites) were errantly affected. So we tore it out.
The design started to form itself – sometimes through extensive discussion and sometimes through off-the-hip decisions. It was also at this point that the decision was made to term the project Advertiser 3D rather than Advertiser 3.0, in the realization that one of the single biggest advantages of the interface was the ability to look at campaigns from multiple angles – pivoting, for you spreadsheet lovers. In other words, looking at your campaigns in 3D.
Month 6. Month 6 in an eight month project is something akin to mile 20 in a marathon. It’s when all the developers are so tired that they no longer really care, when sales and ops are wondering if the product will ever see the light of day, when executives begin to wonder if we can pull it off. The month’s dilemmas were exacerbated by the loss of human resources: some of the team decided they’d had enough. The danger here is the negative effect on the morale of the rest of the team. So we countered. We countered by instituting an incentive program for the remaining team. Guess what – it worked (see Month 7).
The other big deal in month 6 was the realization that some of, no, most of the team actually had no idea what we were building. In our zeal to get the project in motion we’d, I’d, failed to do one of the most important things – get buy off from the team itself and get us all on the same page. Face it, it’s not like we were Apple coming out with the next big thing and trying to keep it under wraps – if anything we were the opposite. We had a great idea in the works for years, we knew it worked, we just needed to rebuild the infrastructure and the interface to make it happen.
This is the point at which Product Managers are made or broken. Knowing when to push or pull, to play the bad guy to both sides and yet the hero to all is a bit of an art.
Month 7 the project goes into overdrive. At this point, our frontend lead had to bail (family emergency) but in his absence something great happened – the team learned to lead themselves. They began to look for creative solutions to problems in ways they hadn’t, and they learned to work synergistically across scrum teams.
Month 8 – testing. OK, to be fair, we’d been testing all along but this is where the “rubber meets the road” (thank you Mr. Yudin for your quips) – complete QA and UAT on a viable, complete interface against real data. So here we are, eight months in, still a fair number of minor bugs to clean up, but looking at a true innovation in ad campaign management. The real test, of course, will be the market, but we’ve gone a long way. During training we learned that, along with the developers, the Ops team was completely unclear how to approach traffic source management – but that was quickly and easily solved. However, it also shows that we have a big obstacle in helping our external users comprehend the advantage. Fortunately most of the savvy customers already understand the reason, though maybe not the method.
I’ve been in software for 18 years, specifically in UX and Product Management for the last 10 of them. But I’ve learned a lot of lessons in this go around:
Lesson #1: Make sure your dev team knows what you’re building. Sounds silly, but it’s true.
Lesson #2: Get buy off from Ops and Sales. If they won’t use it, neither will customers.
Lesson #3: Accept criticism. Tough to be humble sometimes but do it and learn.
Lesson #4: Have fun. Cupcakes, napkin doodles, and crazy ideas go far in the middle of an eight month SDLC.
Lesson #5: Let the devs get creative. The journey to find a code solution can lead to better user experience because at one point, they will think like a user.
The public launch of 3D is imminent, and we already have at least 20 significant upgrades slated for the next two releases. Like most software, this is an evolving product, and its perfection is only a matter of perspective, but we have no intention of giving up on improving it. For now, however, I am taking a breath to reflect on the lessons I’ve learned and formulate my next attack – this time with a little less blind ignorance, and a helluva lot more data to work with.
I’ve been on this project for the last 5 1/2 months and we’ve made leaps and bounds, but in the past weeks, boss man has been harping on the details – those small knick-knacks that make a good UI great. Everyone in technology development has been at this point – where deciding between 80% and 77% opacity makes a difference, whether the 10,000-foot scope makes sense at the 18-inch level.
Today he pointed me (because my official role as product manager was specifically mentioned in it) to an article that, for better or worse, articulated the point of these trivialities. While not specifically targeted at techdev, it makes a pretty good case for one thing – convenience.
I thought about it quite a bit tonight. Convenience. Convenience is the reason we have so many big box retail stores and 7-11s, why we have smartphones that let us call, tweet and watch movies all at once, why we eat so much fast food (though to be fair, it’s fast service food, not fast food). Convenience has permeated our culture and, as its author Kent Goldman argues, it is the very thing that drives our ability to engage.
“Convenience, not beauty, not finely tuned control, drives engagement.”
So people’s need to be engaged – to feel a part of something or to be involved – is defined by whether or not it is convenient for them, and how we (product managers) facilitate that can make or break the product. Nice. Not necessarily the way we actually compose UX, but nonetheless a pretty dang important point. More succinctly:
“Every product manager needs to work through the calculus of engagement. The effort required for a customer to engage with a product has to be lower than the value they derive from it.”
So think before you add. Is that feature necessary? Does the gloss improve ease of use? I constantly use the ATM example, but here it’s appropriate. I’ve used BofA for quite some time. I’ve always hated the ATMs. There are so many buttons and despite using the ATM on a near-daily basis, I still have to hunt for the one I want (Get Cash). I recently switched to Chase. Guess what? No difference. 98% of my ATM use is getting cash. 2% is depositing. Why is the Get Stamps button the same size as the Get Cash button? Why does the interface even have 12 buttons in the first place? Strip it down. Make it big. Give me some satisfaction that I can find what I am looking for quickly. Make it convenient.
“…I’m less likely to ask what their company will allow users to do and more likely to ask what their product empowers users to no longer do.”
I was a developer for a long time. I know that after I just spent months working on some very-minute-but-very-cool feature, the last thing I want to happen is to trim it off. Trust me, I have empathy. But I am the developer, not the customer. And I am certainly not the one paying the bills. My customer is. Let the customer get what he wants.
Read “The Convenience Gap” by Kent Goldman
I read dozens of articles weekly on UX, usability, and the process of designing for the user. As the prominence of UX in technology design has increased over the last several years, one notable area that is rarely touched on is the design of interfaces that deal with enormous quantities of data. In my new job, this is the very conundrum I am faced with and despite years of experience in the field I found myself quite ill-prepared for the task.
The rundown goes like this…the business of the company is big data. The data – queries, search results, clicks, etc – is aggregated into a series of databases. The data is then utilized to pair supply and demand – the supply of search result web pages to the demand of web advertisers. This balance is created by a huge number of algorithms that span and massage the data to provide the best results for both parties (and of course, the company itself). [visual overview here]
The best results occur when a query term (the keyword that the user enters into a search page) lands them on a web page in which a specific advertisement is matched because of a keyword pairing. Simple enough. The complexity begins when the black-and-white distinctions end and variations enter the equation. For example, within pairing, or matching, a term can be exactly or broadly matched – meaning the keyword and query term are identical, or they are loosely similar – and the broadness can have levels of disparity. For each of these matchings, the advertiser “bids” – they provide a price they are willing to pay to get a search result page placement that ultimately leads to a click and eventually a conversion.
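To make the matching-and-bidding idea concrete, here is a deliberately simplified sketch. The field names, match rules, and data are all invented for illustration; real systems score matches with far more sophistication:

```javascript
// Exact match: keyword and query term are identical (after normalization).
function exactMatch(keyword, query) {
  return keyword.trim().toLowerCase() === query.trim().toLowerCase();
}

// Broad match (one loose interpretation): every word in the keyword
// appears somewhere in the query.
function broadMatch(keyword, query) {
  const queryWords = new Set(query.toLowerCase().split(/\s+/));
  return keyword.toLowerCase().split(/\s+/).every(w => queryWords.has(w));
}

// Given advertiser entries of { keyword, matchType, bid }, return the
// highest-bidding entry whose keyword matches the query, or null.
function pickAd(bids, query) {
  const eligible = bids.filter(b =>
    b.matchType === 'exact' ? exactMatch(b.keyword, query)
                            : broadMatch(b.keyword, query));
  return eligible.sort((a, b) => b.bid - a.bid)[0] || null;
}

const bids = [
  { keyword: 'cheap flights', matchType: 'exact', bid: 1.50 },
  { keyword: 'flights',       matchType: 'broad', bid: 0.75 },
];
console.log(pickAd(bids, 'cheap flights to vegas').keyword); // "flights"
```

Note how the broad match catches the long-tail query the exact match misses – which is exactly where the levels of disparity (and pricing differences) come into play.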
This is simple enough and there are lots of companies doing it. Google, Facebook, Yahoo! are all well known participants and in the end, all of us in the SEM business share a lot of common threads. Where the distinction of my employer lies is in opening up a virtually untapped area – traffic sources. Traffic sources are where the users come from. In other words, at a granular level, the sites or pages that a specific ad is placed on.
Up until now, no one has really allowed the advertiser to take control over managing traffic sources. All that is about to change. The release of the software I’ve been working on has this capability. The specification behind the data algorithm is the culmination of 5 years of work and study by the CTO and when the conceptual framework was presented, it both amazed and frightened us.
OK, so it took a bit to set the stage, but here’s the gist of where I was heading. It took that long just to explain the very tip of what the business is, now try to imagine creating a user experience for it. Basically the team was tasked with creating an interface and experience that allowed users to do something they’ve never done, with data they’ve never seen, in ways they ultimately could not ascertain would benefit them. And at the same time, we needed to present gobs of numeric data in a meaningful and impactful way, while providing the right set of tools for research, analytics, and making changes.
So where does one begin? Where this project took a turn from more traditional UX approaches is the weight of the information design in the overall scope, followed by the necessity to provide usable, comprehensive tools to make changes. In approaching the design dilemma, we looked at the deficiencies of the current implementation and realized that the problem revolved around the presentation of data. Initially we’d assumed that the use of spreadsheet-style data grids was the problem, but it was actually that each grid lived in a vacuum and bore no relation to the other data. Furthermore, the grids served little or no value beyond research because few actions could be taken against them.
[sooner or later there will be images here
but at this point I can't actually show them
so you'll have to wait]
Borrowing from more traditional spreadsheet applications like Excel, we first turned the grids into cubes – by allowing users to drill into increasingly granular data through any chosen path, without predetermining the direction, the user can look at data from different angles (hence the name, to be revealed at a later date). Further, we added pivot-like methods so users could take a specific slice of data and view it from two specific angles in a comparative scenario.
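The cube idea is easier to see in code than in prose. Here is a minimal sketch of the principle – the same flat rows can be rolled up along any dimension, in any order, so the drill path is never predetermined. The field names and data are invented for illustration, not taken from the actual product:

```javascript
const rows = [
  { source: 'siteA', geo: 'US', clicks: 120 },
  { source: 'siteA', geo: 'UK', clicks: 40  },
  { source: 'siteB', geo: 'US', clicks: 80  },
];

// Roll up rows along one dimension, summing a metric.
function rollup(data, dim, metric) {
  const out = {};
  for (const row of data) {
    out[row[dim]] = (out[row[dim]] || 0) + row[metric];
  }
  return out;
}

// Drill: take one slice, which can then be re-grouped along another dimension.
function drill(data, dim, value) {
  return data.filter(r => r[dim] === value);
}

// Look at clicks by source...
console.log(rollup(rows, 'source', 'clicks')); // { siteA: 160, siteB: 80 }
// ...or drill into the US slice first, then regroup by source.
console.log(rollup(drill(rows, 'geo', 'US'), 'source', 'clicks')); // { siteA: 120, siteB: 80 }
```

Because `drill` and `rollup` compose in any order, the user chooses the path through the data rather than the interface dictating it.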
Next, we made the grids usable. Again borrowing from traditional spreadsheeting, any cell containing editable data was made just that – editable inline (and to be sure, we threw in multi-row bulk edits, reversibility, and key controls to make it responsive and fast). Finally, we added tools that will eventually allow users to run scenario analysis across a number of factors – geographic location, traffic source performance, general KPI trends, dayparts, and flight dates, among others.
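The reversibility piece is worth a sketch of its own: every inline edit records what it replaced, so undo is trivial. This is an illustration of the pattern, not the actual implementation – the grid shape and field names are invented:

```javascript
// Wrap a set of rows with edit/undo behavior backed by a history stack.
function makeGrid(rows) {
  const history = [];
  return {
    rows,
    edit(rowIndex, field, value) {
      // Remember the previous value before overwriting it.
      history.push({ rowIndex, field, previous: rows[rowIndex][field] });
      rows[rowIndex][field] = value;
    },
    undo() {
      const last = history.pop();
      if (last) rows[last.rowIndex][last.field] = last.previous;
    },
  };
}

const grid = makeGrid([{ bid: 0.50 }, { bid: 0.75 }]);
grid.edit(0, 'bid', 0.60);
grid.undo();
console.log(grid.rows[0].bid); // 0.5 – the edit was rolled back
```

Bulk edits fit the same pattern: push one history entry per affected cell, and a single undo pops them back in reverse order.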
And this is just v1. Wait til you see what’s on the board for v2.
The complexity of the project was exacerbated by one thing – difficulty in seeing past the traditional. Once we got past that, the UX came to us easily. Sure there were knick-knack problems – technological hurdles in large data delivery, problems with user cognition of actions, visual design obstacles, even color as a control factor – but overall, dilemmas suddenly became solutions.
Here’s the rub – most focus on UX nowadays has one specific golden rule – design the experience to engage the user while simplifying the problem. UX articles tend to focus on a microscopic portion of the larger problem and hence don’t always create flow from one action to another.
One site I truly love is MailChimp, but even there I still have complaints. For example, the flow of the campaign creation is beautiful, but the implementation of the editing is clunky. Why does each section have to be its own full-page modal instead of inline editing? And why does the media composition have to use yet another modal? Finally – I am all for big inputs and type, but use them consistently (not to mention that, honestly, sometimes it just makes the interface feel very elementary, which makes me feel, well, stupid).
So how does one continue to solve each individual problem and the global scope?
Incidentally, as an added twist, management provided several edicts at the outset of the project:
To that end, there are apps. Apps in the interface are small widgets that perform microscopic analysis or functions within a small frame interface (think mobile) using less-than-traditional methods (mind-mapped keywords, etc). Apps are not a new thing, but they solve a host of problems by nutshelling a small range of needs into a compact container.
I’m certainly not saying this application is the end-all-be-all of UIs or UXs, but in order to provide the best interface and experience to the user, we had to continuously look at the macro and micro all at once and consider the ramifications downstream as well as upstream.
Partly because I “grew up” in the early days of the Web (that means pre-1998, for those who are asking), I have a keen sensitivity to the seriously bad trend of web pages and applications growing larger and larger at a screaming-fast pace with seemingly no regard for bandwidth consumption. There was a point when consumers began to complain that desktop applications were becoming too bloated – they consumed too much hard drive space and hogged memory resources like they owned the computer.
So why is it that in this day and age we can’t seem to do a better job of using those lessons to streamline web pages? One might say it’s just a novice approach – that in the rush to put out bigger and badder, we neglect to remember that not everyone has a 20Mb download speed. To be sure, yes, there is the app conundrum – quite nicely championed by Apple (see this interesting reference) – which dictates that smaller is better. However, I don’t think there would be much dispute that web pages have just gotten way too big.
So what defines big? There could be a lot of parts to the puzzle. First, of course, is just size (and I mean in bytes). Too many unoptimized images, redundant scripts and styles, inline hacks, blah blah blah. When did schools stop teaching students to optimize? How many content producers bother to test the effect of changing the quality level in Photoshop before handing off images to inventory? These and many other questions perplex me every time I have to wait more than 10 seconds for a page to load on my broadband connection.
A more commonly overlooked problem is the number of connections. I once asked my students if they understood the concept that the more connections a page makes, the longer it takes to get all the parts…or more succinctly, how much sewage can you pump through a pipe? Between both Art Institute and UNLV, maybe 1 in 50 had even considered the issue. To illustrate the problem, open up Firebug’s network tab (or use Fiddler) and watch the network exchanges as the page loads.
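If you don’t have a network panel handy, you can get a rough first estimate just by counting the external references in the markup. A quick Node sketch – regex-based, so only a ballpark (it ignores CSS `url()` references, duplicates, and conditional loads); a real audit should still use Firebug or Fiddler:

```javascript
// Count src= and href= references in a chunk of HTML – a crude proxy
// for the number of connections the page will ask the browser to open.
function countExternalRefs(html) {
  const matches = html.match(/\b(?:src|href)\s*=\s*["'][^"']+["']/gi) || [];
  return matches.length;
}

const page = `
  <link href="a.css" rel="stylesheet">
  <script src="a.js"></script>
  <img src="logo.png">
`;
console.log(countExternalRefs(page)); // 3
```

Run that against a few popular home pages and the 50-connection figure below stops sounding like an exaggeration.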
To be sure, yes, we have a lot of bandwidth, but like any resource, it has its limits and like any resource, if we squander it, it becomes a precious commodity. Back in the day, anything more than 6 connections on a page was considered bad form (and even consider that at one point, any cumulative page load of more than 65K was considered bad too). I can’t even begin to count how many pages load in excess of 50 connections and over 2MB per page.
Fortunately, caching exists, and not because of something developer-designers themselves are doing, but because CDNs, server-makers and browser-makers all realize that the developer-designer world is basically inconsiderate and that they will not bother to tackle the problem they’ve created.
Many sites, to their benefit, are taking novel approaches to solving the problem. First, there’s the stripped down interface – sites like Google, Twitter, and even Facebook to a degree all keep the content up front.
Now to be fair, there is a strain of developers (myself included) that are a bit anal-retentive about some things in the code realm. Take, for example, the question of using quotes to enclose attributes. HTML5 makes this optional, but those of us old-timers who really watch coding practices generally prefer to keep them. But at what cost? Check out this article on JP’s blog. In it, he explains how @dom_monster tweeted about a page from GitHub that contained 4,517 quote characters and what effect that had on performance. Interesting question.
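Out of curiosity, here’s a toy sketch of what dropping the optional quotes would actually save. This is a deliberately conservative transform – it only unquotes values made of plain word characters, where HTML5 permits it – and the sample markup is invented; I am not suggesting anyone run this on production HTML:

```javascript
// Strip quotes only from attribute values that are safe to leave bare
// under HTML5 (no spaces, no >, no quotes, no = in the value).
function stripSimpleAttrQuotes(html) {
  return html.replace(/=(["'])([a-zA-Z0-9_\-.]+)\1/g, '=$2');
}

const before = '<div id="main" class="wide"><img src="logo.png"></div>';
const after = stripSimpleAttrQuotes(before);
console.log(before.length - after.length); // 6 bytes saved (2 per attribute)
```

Two bytes per attribute, and gzip flattens repeated characters anyway – so the 4,517-quote page is paying well under 4.5KB pre-compression. The cost is real but tiny, which is roughly the article’s point.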
So at some point, preference will still take precedence. Where I work now, we are developing a new UI to help one segment of our user base visualize big data, while providing tools for faster, more accurate, and more effective data mining in a web-based environment. And at the same time, the call to arms was (the very arbitrary) “make it cool.” Big task. Amid the rapid pace of development, the production of some 600 UX wireframes and composites, and the digestion of an entirely new methodology for managing syndication traffic sources, the interface bloated out to 1.8MB (though, fairly, it never has to reload since it uses trim Ajax exchanges and JSON objects to keep data intact).
Side note: thankfully, a little re-engineering dropped that by roughly 25%, to about 1.3MB, in the pre-alpha stage two weeks out from launch.
Anyway, my point is – we’ve become a culture either so strung out on putting new apps into the wild with all the UX bells and whistles, or coding so beautifully that reading the script is like watching a symphony unfold before your eyes, but less and less thoughtful about the end delivery to the user.
I live in NYC again, and frankly my AT&T connection sucks. Everywhere. And as mobile web consumption increases, so, seemingly, does the time it takes to get anything on the interface. We hear lots of advertising about 4G and faster this and that, but it doesn’t give us (speaking to the designer-developer community) an open license to forget what it’s like to have less. Just because we build an interface to work better on mobile doesn’t mean it’s truly DESIGNED for mobile. UI and tactility and gesturing and all that buzz today don’t mean we’ve truly built the (web) app to work better on mobile platforms, because the other half of the equation is speed of delivery.
My point is – think about both sides of the equation. Consumers, your users, are becoming data hogs. As data needs increase, delivery inevitably slows down. Trimming your component delivery is a critical part of the design process, particularly in the mobile environment, but just as much so in the desktop world.
Just a thought.
Like many developer-designers, I do spend hours checking out the cool CSS implementations. But an interesting tweet from Smashing Magazine got me to thinking about the idea of cool, the problem of trend, and some of the after effects of the rush to improve on CSS.
I grew into the Web before CSS really existed. In 1994, I joined an inspired group of people with little technical knowledge on an endeavor called Visual Radio. Over the course of the next four years, we carved out a pretty good niche in the market, largely due to a few very high profile projects (BizTravel being one) and some very profitable side projects (developing one of the first X.25 pad commerce gateways). While the company later imploded, I learned a lot of good lessons in that time, not the least of which is that human emotion and responsiveness to a site or interface is driven by two very different needs – the need to be informed and the need to be pleased.
Followed a tweet from @stephanierieger that was retweeted by @bdconf and led to this blog post (by Stephanie Rieger), which linked to a rather insightful article (by W. Wayt Gibbs, staff writer for Scientific American) containing a smug, funny, but a little too honest comment by Nathan Myhrvold (Microsoft’s VP of applications and content…in 1997) that explains the conundrum of why, despite the horsepower of computers and devices, applications always seem to run inefficiently.
“Software is a gas,” he said. “It expands to fill its container.” In fact, that is more of a policy than a necessity. “After all,” he observed later with a laugh, “if we hadn’t brought your processor to its knees, why else would you get a new one?”
Rieger mentions: ‘Myhrvold also goes on to say that: “In demos, the new technologies are inarguably cool. Cool is a powerful reason to spend money.” Fifteen years later (the article is dated July 1997), little of this appears to have changed. Make of that what you will.’
Thoughts? Has the Web influenced this, or are web applications becoming so heavy that they aren’t any different than apps of the past, just using bandwidth AND processor in tandem? The application in progress here loads 1.4MB one time then caches the rest but does regular data exchanges that range from 10K to over 1MB. It’s not super efficient but it is balanced as much as it can be considering that it is a Big Data application. How do we define things like efficiency, bloat, over-featured, over-designed in an age when we have the power and the game is to keep right along the (bleeding) edge without falling over?
For several years now I’d had my students conjecture about the future of the Web, opening the door with the Berners-Lee quote:
I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize.
Every year there were always some pretty good ideas fleshed out, but basically the sci-fi-like concept was that there would be a device that, via bots, utilized data from multiple sources, aggregated them, and devised some kind of intelligent guess to guide one through life. I’d give the example of a device that reminded you that you needed milk because it looked inside your fridge but told you to go to a different store than usual because the route to your normal market had too much traffic due to an accident and the corner deli’s milk was expired (etc, etc).
First off, as if it were not obvious, I haven’t been writing. I keep thinking that it might be time and then I get sidetracked and neglect to get back into it.
Lots of changes – new job, new place, new perspective. The new gig is in full swing – working on the design of a very kick-ass web interface for search syndication ad management. When one’s impression of internet advertising is what you see in AdSense and DART, it can become very skewed, as well as undeveloped. The world of search syndication is huge and amassing like a snowball rolling downhill. Even here we’re just beginning to realize the potential of some untapped areas and trying to develop an easy-to-use interface to both educate and improve yield for the customers, while simply trying to get our own arms around it at the same time. More on that actual development another day.
Anyway, just dropping my hat back into the ring. This will be an interesting, albeit exhausting, quarter.
Tonight was my last night teaching (maybe*). I started in 2006 teaching at the Art Institute of Las Vegas and, after starting my Master’s program at UNLV, started teaching there as well. AILV was a much more practical approach, UNLV more theory and discussion. It’s had its ups and downs, but in the end, it was a bit bittersweet.
The Things I Will Miss
Things I Won’t Miss
Anyway, I say maybe because there’s still a chance I might be returning for INF400 Web Security but not likely. So to those students who at least appeared eager that I would be back for one more round before UNLV shuts the doors on Informatics forever, I’m sorry I didn’t tell you but keep at it and don’t be afraid to email me if you have questions.
Following a tweet link from Smashing, I read a pretty good article by Shanshan Ma on UXmatters discussing the act of checking flight status on mobile devices. I’ve excerpted it below, but a couple of things stood out.
First, interestingly, the article specifically asks for comments, yet (even when signed in) readers are never presented with a comment box. Frankly, I’d have just posted my comments there, but now I feel compelled to write here.
Second, relative to the article itself … of all the sites presented, they all stink. The problem with Flight Status checkers is that fundamentally they all use similar methods, similar input tools, to acquire the data. What this means is that you get a screen with several selectors and several free-type inputs. Despite advances in the UI tools for both selectors and typing, they are still fundamentally difficult to use, particularly for situations that many users find themselves in when using these services (in my own case, I found myself driving in 6 inches of unexpected snow in NYC this October and trying to get a JetBlue status).
One of the nicest things about having been in the industry for so long is that I can reminisce and consider technologies that we don’t often see today but that may still have applicability. In this case I am referring to the “deck” principles used in HDML in early versions of phone browsers. The deck principle basically provided that multiple pages of data were transferred with each page call, reducing the number of times callbacks were required and increasing the individual interactivity by allowing data to be shuffled between “cards.” Couple that with good Ajax utilization, and you might have a pretty neat app.
For a good flight status checker to work, think in terms of the actual UI. In my own incident, I needed to not have to enter keyed data – just click and fire with easy-to-hit buttons and less on-screen information. What I propose is something more like this (and I apologize, I sketched this out real quick just now and sent it with my phone cam).
Here you get no more than a couple of selections per screen, always presented as click buttons. The beauty is that you can use Ajax effectively to pre-load all of the subsequent screens with minimal data transfer. For example, by coupling the Location Services data of the current location with the user selections, you’d likely be able to guess the probable dates and flight numbers (no more than 3 days in the future, any given airport, within a 6-hour time window, the lookup is no more than about 40 flights).
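To show how small that candidate set really gets, here’s a rough sketch of the narrowing step. The flight data, field names, and window size are all invented for illustration – the point is only that airport plus time window shrinks the lookup enough to ship in one Ajax response:

```javascript
// A tiny invented flight table; a real one would come from the airline's API.
const flights = [
  { airline: 'B6', number: 101, from: 'JFK', departs: Date.parse('2011-10-29T09:00Z') },
  { airline: 'B6', number: 202, from: 'JFK', departs: Date.parse('2011-10-29T18:00Z') },
  { airline: 'B6', number: 303, from: 'BOS', departs: Date.parse('2011-10-29T10:00Z') },
];

// Narrow to one airport within a +/- 6 hour window around "now".
function candidates(all, airport, now, windowMs = 6 * 3600 * 1000) {
  return all.filter(f =>
    f.from === airport && Math.abs(f.departs - now) <= windowMs);
}

const now = Date.parse('2011-10-29T12:00Z');
console.log(candidates(flights, 'JFK', now).map(f => f.number)); // [ 101, 202 ]
```

With the list that small, every subsequent “screen” in the deck can be pre-loaded in the same payload – no further round trips needed to show the user their flight.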
Another problem I faced was that all of the interactions used POST, which means that it was impossible to go backward and still have the data intact (so I could modify one small bit and try again), forcing me to re-enter 6 fields of data each try. Using the hashtag approach, you’d also allow for backward and forward runs through the history.
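The hash idea is easy to sketch. The helpers below serialize the search form into a URL fragment and parse it back out; in the browser you’d assign the result to `location.hash` and restore the form on the `hashchange` event. The field names are invented, and this is a sketch of the approach rather than anyone’s actual implementation:

```javascript
// Serialize a flat state object into a URL fragment.
function stateToHash(state) {
  return '#' + Object.entries(state)
    .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
    .join('&');
}

// Parse a URL fragment back into a state object.
function hashToState(hash) {
  const state = {};
  for (const pair of hash.replace(/^#/, '').split('&')) {
    if (!pair) continue;
    const [k, v] = pair.split('=');
    state[decodeURIComponent(k)] = decodeURIComponent(v);
  }
  return state;
}

const search = { airline: 'B6', flight: '1012', date: '2011-10-29' };
const hash = stateToHash(search);
console.log(hashToState(hash)); // round-trips back to the same fields
```

Because every search now lives in the URL rather than in a POST body, the back button restores a previous search intact, and the user only changes the one field that was wrong.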
Thinking through the user experience is part visualization, part interaction, and part data process and information architecture. Trying to remove any part of the puzzle leaves something to be desired in the final product. Flight status checkers are a great utilization of mobile web, but THINK the process, use it, determine what can be made better, and do it.
Nice…after many years of trying to get coding students to understand why one can’t directly access cross-domain resources and Web security students to understand its implications, the fourth IE10 Platform Preview features, amongst other things, support for CORS (cross-origin resource sharing). The full highlight list of HTML5-affected updates to IE can be found here, but I am particularly gung-ho about CORS and the video text captioning (which was always difficult in the past). It’s not that CORS wasn’t available in other browsers, particularly assisted by its inclusion in recent jQuery builds, but at least this sets the stage for better cross-browser compatibility in HTML5 applications.
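For students who never quite got the mechanics: the heart of CORS is one check the browser performs on the response. A simplified sketch (this ignores credentialed requests, preflights, and the other `Access-Control-*` headers, all of which add further rules):

```javascript
// Decide whether a browser would expose a cross-origin response to the
// page, based on the response's Access-Control-Allow-Origin header.
function corsAllows(allowOriginHeader, requestingOrigin) {
  if (!allowOriginHeader) return false;           // header absent: blocked
  if (allowOriginHeader === '*') return true;     // wildcard: any origin
  return allowOriginHeader === requestingOrigin;  // otherwise: exact match
}

console.log(corsAllows('*', 'http://example.com'));                  // true
console.log(corsAllows('http://example.com', 'http://example.com')); // true
console.log(corsAllows('http://example.com', 'http://evil.com'));    // false
```

The server opts in; the browser enforces. That inversion – the resource owner, not the requesting page, granting access – is the piece students always tripped over.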
Here’s a post on MSDN by Rob Mauceri with more details…
Having been in the industry for (eesh) 17 years+ now, I still remember (and regularly lecture on) the issue of bandwidth consumption and preservation and how poor performance is related to it. jQuery, which I love, is no pipe-hog by any stretch of the imagination, but it still comes with a bit of bloat attached in the form of methods you don’t need or use most of the time. Enter jquip, or jQuery-In-Parts, a stab at minimizing and modularizing the jQuery library. For more info and download, go to the jquip GitHub.
I’ll let you check out the rather extensive method and event list, but be assured that the things we really like about jQuery – the $(selector), the quick data methods, and several events for data handling – are all there. Plus there’s several plugins to tack on more methods and events without the total KB package jQuery sits on right now.
On the flip side, if we could just get everyone to download the .js from the same source URL (such as //ajax.googleapis.com/ajax/libs/jquery/1.7.0/jquery.min.js) and reinforce caching, we wouldn’t have as much of a problem anyway.