April 29, 2011 - Marc Heuman, CIPS Alberta
Being in the IT sector, the range of discourse for a blog can be quite broad. I’m hoping that I can touch on just the right type and amount of content to satisfy more than just the editor’s request. Since there are more IT directions than politicians to vote for, I’ll dedicate this blog to several disparate topics.
Phone, tablet, or pen and paper?
First up, I can’t help but blog about my current self-directed experiment. I purchased a tablet a couple of weeks ago. It seemed like a natural extension, since I already had a smartphone. The line of thinking was pretty typical: something that worked in a 4-inch form factor would surely work even better in a 10-inch form factor. Same operating system, same functionality. When I am at my desk, I tend to spend most of my time in documents, spreadsheets, project plans, presentations, email and calendars. Out and about, I rely quite a bit on access to all the same things, but more to read than to edit. What I found was that a 4-inch smartphone is “almost” enough, while the 10-inch tablet is too much. The phone has a great web browser and notifications, integrates all my personal and professional email and calendars, and when I’m bored, can play music, a movie, or even a decent game of backgammon. The tablet can too, but I have to carry it around almost like luggage. It’s huge! And not to say that I’m always politically correct, but I can somewhat unobtrusively use my phone during a meeting to read or respond to an important email, check facts, and so on. Try being stealthy with a 10-inch flat-screen monitor. My phone also has both cell/data and WiFi connectivity, while the tablet is WiFi only. I can easily set up the phone as a WiFi hotspot, but then I am draining batteries on two devices, and I always have to keep the two of them together. I love being connected, but I’ve found that being able to put something comfortably in a pocket is more convenient. So… function over form for me. However, it is a lot cooler playing Angry Birds on a 10-inch screen. And that is the struggle: convenience vs. cool factor. What matters most to you?
When is an app not an app? When it is an operating system.
I can’t talk about smartphones and tablets without getting a little more geeky. I have an Android smartphone. It is honestly the first smartphone that I have not wanted to throw against a wall 10 minutes after getting it. I began my journey on a Palm Pilot, and I loved that Palm Pilot too. It seemed intuitive, and once I learned the gestures, I could take notes almost without looking. Then I migrated to the iPAQ, then the Motorola Q, and then the BlackBerry Storm prepared me for full-screen, touch-sensitive displays. I’ve used pretty much everything but an iPhone. The catch is that while every device has its own unique approach, the utility of the device is the same: email, surfing the internet, checking my calendar. I don’t really get the religious operating system wars. Capabilities are similar, Angry Birds is Angry Birds, and web surfing is web surfing. Yet we are bombarded with how many apps are in app stores, or which operating system is better. Users don’t interact much at the OS level; users use applications. About the only thing I’ve learned from my own experimentation is that phones and computers don’t really mix very well yet. I don’t want a phone that takes 3 minutes to boot, nor do I want a phone that has to be reset because of a bad app or a bad web page. A phone with anti-virus? I do enjoy the convenience of having both on a single device, but I still think we are not quite there yet.
If internet usage is like electricity, do I pay even when I don’t use any?
On the networking front, there seems to be a looming dark cloud over something called Usage-Based Billing (UBB). Carriers want to charge extra for those individuals who are “consuming” large volumes of internet traffic. The CRTC had given UBB provisional approval, but has since been directed by the government to review that decision. UBB isn’t new, and most of us are likely sympathetic to the concept of paying for what we use; that reflects standard capitalism. What is unusual is a charge model where it costs more to download content from the internet than it does to copy the data to a disk or flash drive and then courier the drive to the recipient. Would these kinds of charges not be a disincentive to using the internet? We live in a world where email is already too slow a form of communication. Already, movies, software, operating system upgrades, voice and video calling, collaborative content sharing, and many other capabilities can all be delivered to us through the internet. Soon, I can only imagine more people emailing 1080p home movies to the grandparents, or travelling parents video calling back home for hours. To me, almost everyone will soon be hitting an artificial ceiling where daily life consumes the monthly quota. Why create a sin-tax for over-use when we should be openly welcoming innovation, collaboration, and new and interesting ways to work and live together?
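To see how the download-vs-courier comparison could play out, here is a back-of-envelope sketch in Python. Every number in it — the monthly cap, the per-gigabyte overage rate, the drive price, and the courier fee — is an illustrative assumption for the sake of the arithmetic, not any carrier’s or courier’s actual pricing:

```python
# Back-of-envelope comparison: transferring a large file under a
# usage-based billing plan vs. shipping it on a physical drive.
# All rates below are hypothetical, for illustration only.

def download_overage_cost(size_gb, cap_gb=25, overage_per_gb=2.00):
    """Overage charge for one large transfer that blows past the cap."""
    overage = max(0, size_gb - cap_gb)
    return overage * overage_per_gb

def courier_cost(drive_price=60.00, shipping=15.00):
    """Flat cost of buying an external drive and couriering it."""
    return drive_price + shipping

size = 500  # GB of home video, backups, photos, etc.
print(f"Download overage: ${download_overage_cost(size):.2f}")
print(f"Courier a drive:  ${courier_cost():.2f}")
```

Under these made-up numbers, the 500 GB transfer incurs $950 in overage charges, while the drive-plus-courier route is a flat $75 — which is the oddity the paragraph above describes.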
Lamenting the coming of a combination Dick Tracy smart watch/computer/phone/USB memory stick?
I’ll close the blog with a little about hardware and software in the workplace. Back when mainframes lived in glass houses and IT people wore white lab coats, computing was expensive and resources were rare. Swing the pendulum way over to today: computing is ubiquitous. Kids are hacking their PS3s to run Linux or old DOS games. Many, if not most, home PCs have the latest and greatest versions of office, photo, graphics, and other software. Millions of people share calendars and work collaboratively using Hotmail, Gmail, and many other cloud environments. I think one of the reasons we are starting to define the BYOD (Bring Your Own Device) model is that many people want to use what they are accustomed to. The corporate world has always struggled with the concept of centrally-controlled hardware and software. The main driver has primarily been lowering the Total Cost of Ownership (TCO): more control meant lower operational costs. I remember the days when I couldn’t even change the background image on my “Personal Computer”. What I think is missing from the bigger picture of TCO is the agility of the end-user. There is a cost to limiting the speed of an end-user who knows what they want and how to get it, just not necessarily using the corporate standard. Does it matter if an end-user uses an iPad to create content, as long as it can be easily shared with others using different hardware or software? Does it matter if the end-user uses browser A, B, or C? What is the cost of taking a user who is hyper-capable in an office suite and forcing that user onto a corporate standard unknown to them, perhaps with less functionality? I think we are quickly headed into a time where users are no longer users, but consumers of technology, both personally and professionally. And what seems like technology to us now is looked upon by our kids like chalk and a chalkboard.
The number of those of us who appreciate new technology in relation to the first time we played Pong is dwindling. With standards in document exchange, connectivity, security and a few other key areas, corporations are going to have to open up, become far more agile, and empower their end-users. A certain amount of trust and user responsibility is needed, but remember, these are the same people now negotiating million-dollar mortgages. So if corporate technology standards are function, and end-user knowledge, experience and agility are form, which would you choose?
CIPS Volunteers and Members - share your blog posts by contacting us at email@example.com