Surveys and Mediocrity

In January, Kathy Sierra wrote a blog entry about how companies fail by choosing not to innovate. The premise is that companies that are overly risk-averse eventually fail. Though the article wasn't necessarily groundbreaking, one graphic immediately caught my eye. It showed a continuum running from love to hate, with a grey area in the middle. The point was that a company can be loved or hated and still be successful, but those stuck in the middle – the "zone of mediocrity" – are, to use her word, "screwed".

While architecting and implementing IVR and other voice applications, one of the most common applications I was asked to design was the post-call automated survey. These applications were triggered after an agent hung up and either transferred the caller directly to the survey or started an outbound call back to the original caller. The idea was that surveying the caller immediately after the call ended would accurately capture the caller's experience.

Most of the time, I argued against implementing these surveys.

Contact center managers feel that these surveys can be used to track an agent's performance: the looming presence of an automated survey forces the agent to always perform admirably in order to avoid low marks. The problem is that the results rarely represent real life.

The graphic above represents the reality of survey respondents. People who have a polarizing experience – either excellent or horrible – tend to respond to surveys. Those whose experience was merely adequate don't feel the need to respond. Call center managers believe that you can use incentives to drive responses, but doing that skews the results towards the positive side.

The other problem is that a caller's happiness is rarely based on the agent and is more a result of the call's disposition. If I call to get a rental car and there are none available, it doesn't matter how nice the agent on the phone is – I won't be satisfied. On the other hand, an average agent who, by luck, finds that elusive rental car after I've struck out at three other agencies will get a very high score.

That's why, when a call center wants to implement post-call surveys, it's critical to understand the motivation behind them. If they're looking to do trending – comparing results before and after implementing a new process or program – that's fine. If it's part of an overall call center "health check", that's OK as well. If, on the other hand, it's being used for employee monitoring, training or "compliance", then I'd have to advise against it.

Just because it’s new to you…

The second great offense to the history of bartenders past (right behind calling anything in a cocktail glass a "martini") is laying claim to creating something that, well, you didn't create.

Case in point: the January issue of San Francisco Magazine (featuring commentary from yours truly), in which a bartender lays claim to having "created" the Aperol Sour.

The sour family, as it has been defined, is one of the oldest families of cocktails. Take a single liquor, lemon juice and simple syrup (occasionally with egg white or some other modifier such as a touch of grenadine, another liqueur or even a splash of juice), shake the bejeezus out of it and voila – a sour. Add whiskey? Whiskey Sour. Add pisco (and egg white)? Pisco Sour. Add Benedictine and whisky (and an orange wedge garnish) and you've got a Frisco Sour. Add Midori and you get a… well, a throwback to a trendy bar in the late 1980s with underage teenage girls.

Now, there are variants of the sour that really are new, innovative cocktails. For the past few weeks, Scott Baird at Coco 500 has been working on a "Chinatown Sour" – a modification of the classic Whiskey Sour, but with rye whiskey and a little candied ginger or ginger syrup as a modifier. The recipe is still being worked out – but the resulting drink is truly something new.

However, laying claim that you “created” the Aperol Sour?

Even if you accept that simply adding a new liquor to an established base cocktail counts as creating something new, the Aperol Sour has been found throughout Europe for years. In fact, Charles Schumann, the famous Munich bartender, published a recipe for an Aperol Sour in his 1991 bartender's guide "American Bar" – a staple for bartenders worldwide, found on the shelves of most good cocktail establishments.

Now, I'm not naming names because I've been in the publishing game before, and I know there's a good chance that the difference between a bartender calling this his "signature" drink and it being deemed his "creation" lies in the hands of the author and the editor. However, I see this happening far too often – people laying claim to inventing things simply because they never did their research. Obviously these people never had to go through the process of researching prior art when filing for a patent!

At a minimum, there are four reference guides that I consult before ever claiming ownership of a cocktail recipe: Paul Harrington's Cocktails, Gary Regan's Joy of Cocktails, Charles Schumann's American Bar and the Savoy Cocktail Guide. If a recipe doesn't turn up in a cursory search of these guides, then there's a good chance it's something new.

Just because something is new to you doesn’t mean it’s actually new.

Next time, we'll talk about bartenders taking a famous cocktail, mangling the recipe, and still attributing it to the originator (such as taking a drink designed to be served "up" and turning it into a long drink).

The “Five Things” Meme

Bob has made me one of the latest victims of the “post five things you might not know about me” – so for what might be my last post of 2006, here are my five things…

  1. In 1999, I competed in the Northern California Golden Gloves competition in the 240+ weight class. After getting a “pass” through the first round, I went up against someone much taller, much heavier, with more experience and with a longer reach. I know you’re expecting me to say that I won, but come on – look at the odds. He didn’t knock me out, but I did have the experience of being punch drunk afterwards.
  2. After the January edition of San Francisco magazine is published next week, I might become a pariah in the San Francisco cocktail circles. Still, the people in the industry I respect understand how I can say that the local palate is still underdeveloped – and I think that the good people at Rye will still tap me on the shoulder to judge some of their upcoming competitions.
  3. Every Friday, you can find my wife and me at the end of the bar at Coco 500 drinking cocktails put together by one of the best bartenders in the city – Scott Baird. Sorry, did I say drinking? I meant researching for future competitions.
  4. Though I lived in Germany for almost three years, I can barely get by speaking Deutsch. I can, however, say that I have successfully spent many days at Oktoberfest – and by days, I mean days that started at 10am and ended at 10pm. It’s not about the drinking – it’s about the strength of your bladder.
  5. I never graduated from high school. I have my bachelor's degree (in Liberal Studies), but I never finished my senior year of high school. I did, however, receive a Commonwealth Diploma from the State of Pennsylvania.

And on to the next victims… Jocelyn, Larry, Dave, James and Ash.
Tell us five things we wouldn't know about you and then find five other suckers… er, victims… er, friends to bring into this networked version of a chain letter.

OpenID and Promiscuity

Yesterday, Paul Madsen posted a small comment regarding the OpenID plugin for MediaWiki.

The plugin, as written, gives Relying Parties (in this case, the MediaWiki sites) the ability to build OpenID whitelists and blacklists. For example, a Relying Party could choose to accept only OpenID identity URLs from idp.myvauthid.com, or to reject all OpenID identity URLs from LiveJournal.
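To make the idea concrete, here's a rough sketch of what such a trust filter might look like. This is illustrative Python, not the actual MediaWiki plugin code, and the provider hostnames are just examples:

```python
from urllib.parse import urlparse

# Hypothetical trust lists -- the real MediaWiki plugin is configured
# differently; this only illustrates the concept.
WHITELIST = {"idp.myvauthid.com"}   # only accept these providers
BLACKLIST = {"livejournal.com"}     # ...or explicitly reject these

def provider_allowed(openid_url: str) -> bool:
    """Decide whether a Relying Party should accept this OpenID identity URL."""
    host = urlparse(openid_url).hostname or ""
    if WHITELIST:
        # Whitelist mode: accept only listed providers (and their subdomains).
        return any(host == p or host.endswith("." + p) for p in WHITELIST)
    # Blacklist mode: accept everyone except listed providers.
    return not any(host == p or host.endswith("." + p) for p in BLACKLIST)

print(provider_allowed("https://idp.myvauthid.com/users/alice"))  # True
print(provider_allowed("https://alice.livejournal.com/"))         # False (whitelist mode wins)
```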

Paul posits that this sort of model is antithetical to the whole idea of OpenID – that a Relying Party is bound by best practices to accept any OpenID from any OpenID identity provider.

However, when it comes to security – especially with the implementation of the OpenID AQE – giving Relying Parties the ability to explicitly trust, or more importantly not to trust, an identity provider is critical. In a completely decentralized authentication model such as OpenID, there will be a need to block rogue identity providers – especially once comment spammers figure out how easy it would be to set up an OpenID provider that always confirms an identity without requiring authentication.

As OpenID grows beyond wikis and blogs and becomes an identity system used for handling more secure or transactional data, the ability to trust specific Identity Providers becomes key. Approaches like the MediaWiki plugin's may break part of the original vision of the standard, but they provide the gateway to OpenID's future.

When Biometrics Goes Wrong

File this under the category: just because you can do something doesn’t mean you necessarily should do something.

There’s been a good amount of recent chatter online regarding the upcoming IE and Firefox plugins for the Polar Rose photo indexing service.

Similar to the initial intent of almost-acquired-by-Google Riya, Polar Rose’s goal is to create a search engine for faces in photos. As quoted from the company website:

“Polar Rose makes photos searchable by analyzing their content and recognizing the people in them.”

This works by getting people to identify people in their own photos and in photos from friends (see this example from Flickr) – essentially using people to train the system. Using standard techniques from the biometric facial recognition software typically used by law enforcement, these static photos are then used to create a three-dimensional model. This is done by taking a few standard reference points (typically the pupils, the tip of the nose and the curvature of the mouth) and calculating the position of the face from them.

By using metadata encoded in the picture, it's also possible to adjust the facial image based on the date and age of the photo. So, a clean-shaven picture of you from the summer can be correlated with a later picture taken in the winter with a full, cold-defying beard.

If you’re not concerned yet, think about the ramifications of this – photos posted online with plausible anonymity are no longer anonymous at all.

As a biometrics researcher, it's been important for me to stay on the side of authentication, not identification. Authentication means that you start with a claimed identity, and the biometric system confirms it. For example, with the vAuth(tm) platform that my company has created, you have to provide your user ID (or an alternate identifier), then provide voice samples to confirm your identity claim. Consumer and corporate fingerprint systems work the same way – you present an identity claim (login ID, ID number, ID card), then validate that you are the person you claim to be.

Systems like Polar Rose's are identification systems – they take a photo, extract the face, compare it against the corpus of collected faces and try to identify who it is. Technology like this is better handled by law enforcement, not put in the hands of the general public.
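The distinction is easy to show in code. The following is a toy Python sketch – the similarity function, threshold and feature vectors are made-up stand-ins for a real biometric matcher – contrasting 1:1 verification against 1:N identification:

```python
import math

THRESHOLD = 0.8  # toy acceptance threshold

def similarity(sample, template):
    """Toy stand-in for a real biometric matcher: compares two feature vectors."""
    return 1.0 / (1.0 + math.dist(sample, template))

def verify(claimed_user_id, sample, templates):
    """Authentication: start with a claimed identity and confirm it (1:1)."""
    template = templates.get(claimed_user_id)
    return template is not None and similarity(sample, template) >= THRESHOLD

def identify(sample, templates):
    """Identification: no claim at all -- search the whole corpus (1:N)."""
    best_id, best_score = None, 0.0
    for user_id, template in templates.items():
        score = similarity(sample, template)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id if best_score >= THRESHOLD else None

# Example with made-up two-dimensional "features":
templates = {"alice": [0.1, 0.9], "bob": [0.8, 0.2]}
print(verify("alice", [0.12, 0.88], templates))  # True  -- the claim is confirmed
print(identify([0.79, 0.22], templates))         # 'bob' -- no claim needed at all
```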

There are millions of pseudo-anonymous pictures online. For example, I have a couple hundred pictures on Flickr. Some of them have my first name. None of them have my last name. Now, let's say that one of my friends is a Polar Rose user – he or she could start tagging my face with my complete name, without my knowledge or permission. Now, let's say someone else stumbles across a photo of me from somewhere else – say, a candid from a cocktail night where I'm not mentioned in the commentary or credits – and is also a Polar Rose user. That person uses the plugin to see who I am and gets my full name. That person is now a single Google search away from knowing where I work, where I live, and – from possible blog entries – where I eat and drink.

It's a slippery slope to the erosion of privacy. Embarrassing photo from the office holiday party makes it online? That photo of you in the skimpy bikini? The one from your buddy's bachelor party? These are the kinds of photos you don't mind being online because they're pseudo-anonymous: someone who knows you in real life might be able to identify you, but the majority of people who see the picture don't know who you are beyond whatever information you choose to attach to it. Consider these pictures to be generally available.

The ethics of using biometrics for identification are complex and murky. Putting this kind of tool into the hands of miscreants, stalkers and pedophiles isn't something to be taken lightly – and a simple license agreement stating that the tool can't be used for stalking just isn't going to be enough.

Strong Auth Drives Conversational Access

When I'm wearing my analyst hat, I'm constantly asked if "this is the year for…" Is it the year for VoiceXML? The year for speech recognition? The year for speaker verification/voice biometrics? The year for VoIP? For the past year, I'd answer every question the same way – "2007 should be a big year" – because the robustness of the technology, combined with the maturity of the vendors in the Conversational Access Technologies (CAT) arena, points toward the adoption of all of these technologies.

I still believe that 2007 is the year when we turn that corner, hit the end of the runway and take off, cross the chasm and meet up with every other business cliché that describes what happens when the latent need for solutions breaks through the fear of being an early adopter and sales start to ramp up. However, next year's growth is not due to the technology, or the vendors, or even cost avoidance. Next year's growth will be driven by federal mandates such as the FFIEC guidance.

The first generation of Conversational Access Technologies was found in the financial industry, which brought us the first widespread use of IVRs for credit card self-service. It's the financial industry that will also drive adoption of the next generation of CAT.

The trigger, as mentioned in previous reports and advisories authored by (and for) Opus Research, is the FFIEC guidance. The guidance stated that in 2006, financial institutions needed to implement multi-factor authentication for the web. In 2007, this extends into telephony channels as well.

Early implementers of multi-factor security at banks primarily went down one of two paths: one-time-password (OTP) generating tokens and shared secrets.

OTP-generating tokens were an obvious choice for many banks, as they have been used internally for years to restrict access to secure platforms. Solutions such as RSA's SecurID and Verisign's VIP generate a new numeric PIN every 60 seconds. A user logs into a website with their user ID and password, then enters the generated PIN to get access. It's a very straightforward solution, though it's considered expensive because each user needs their own token to display the one-time password. RSA is considered the market leader in hardware-based OTP technology.
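For illustration, here's the general shape of a time-based OTP check – an HMAC over a 60-second time counter – sketched in Python. This is not RSA's proprietary SecurID algorithm, just the generic pattern that such tokens follow:

```python
import hashlib
import hmac
import struct
import time

INTERVAL = 60  # seconds per PIN, matching the 60-second rotation described above

def current_pin(secret: bytes, at=None, digits: int = 6) -> str:
    """Generic time-based OTP: HMAC the current 60-second counter with a shared secret."""
    counter = int((at if at is not None else time.time()) // INTERVAL)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify_pin(secret: bytes, submitted: str, drift: int = 1) -> bool:
    """Accept the PIN for the current interval, allowing one interval of clock drift."""
    now = time.time()
    return any(hmac.compare_digest(current_pin(secret, now + d * INTERVAL), submitted)
               for d in range(-drift, drift + 1))
```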

Shared “Secret” Information makes up the other predominant solution for handling verification. There are three major categories of shared secrets:

  • Self-Supplied Secrets: at the point of registration, the system asks you to answer a number of questions (What city were you born in? What is your favorite color?), and at login you are asked to answer one or more of them.
  • Historical Data: the system uses historical information – ranging from "what was the amount of your last deposit?" to "when did you pay off your car loan?" to "what was your address in January 2001?" – gleaned from a number of public and internal databases. You don't pre-answer any questions.
  • Photo Preferences: also pushed to market by RSA as a result of its PassMark acquisition, this method has you pick a preferred photo out of a selection of up to thousands; at login, you have to select that photo again.

The failure of shared “secret” information is that it is rarely secret, and more important, the more this “secret” information is used, the less secure it becomes.

For example, when it comes to self-supplied secrets, the most common questions are easily answered from the web or publicly accessible databases: birthdate, mother's maiden name, pet name, etc. How did Paris Hilton's Sidekick get "hacked"? Someone figured out that she used her dog's name. More importantly, most websites and services ask for the same information – birthdate, street where you grew up, mother's maiden name, favorite pet – which means your information is increasingly out in the open. Historical data is also challenging: when I went online to request a copy of my credit report, it took me five minutes to figure out whether I ever had a student loan from a specific bank, because the lender had changed multiple times through consolidations and one bank selling the loan to another.

However, the largest challenge with shared "secret" information is that it really only applies to the web. Securing a phone transaction with a picture is ineffective, and speaking freeform text to answer a historical or shared-secret question isn't technologically feasible. The only option would be to present multiple choices for the user to answer, but best practices and common sense rule out any security method where a potential answer is given at the time of the challenge.

This is why 2007 becomes the year of the CAT.

The mandate for banks to implement multi-factor authentication for the web left the field wide open for vendors to propose "creative" solutions to achieve FFIEC compliance. However, once voice is thrown into the mix, the list drops dramatically. Over the phone, there are only two input methods available: touchtone and voice. That leaves two methods for strong authentication: speaker verification and one-time PINs.

Now, though I am the CTO of a voice biometrics firm, from a functionality perspective both solutions solve the problem. Both asking a user to enter a one-time numeric PIN generated by a hardware token and asking them to leave a voiceprint to gain entry satisfy the requirements for multi-factor authentication.

More importantly, both solutions can be easily implemented for web and voice, assuming that the bank has a strategy for implementing a well thought out CAT infrastructure.

Implementing one-time numeric PINs for the voice channel follows the web model in a CAT environment. The voice application asks the user for the OTP, passes it to the appropriate authentication system and gets back a response indicating whether the user passed or failed the authentication request. Since the OTP system is already integrated with a web process (typically a web service/SOAP call), the voice application can make the same call (simplified with the use of a VoiceXML 2.1 request) and parse the same response to grant access.
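As a sketch of that reuse, consider a single verification service that both the web login handler and the voice application call. The endpoint URL and payload below are hypothetical, and a real deployment would more likely be a SOAP call than the simple JSON POST shown here:

```python
import json
from urllib.request import Request, urlopen

# Hypothetical shared verification endpoint -- the same back end answers
# both the web front end and the VoiceXML application.
AUTH_ENDPOINT = "https://auth.example-bank.com/otp/verify"

def verify_otp(user_id: str, otp: str, channel: str) -> bool:
    """Submit the user's one-time PIN and return pass/fail.

    channel is "web" or "voice"; the back-end logic is identical either way,
    which is the point of standardizing on one CAT infrastructure.
    """
    payload = json.dumps({"user": user_id, "otp": otp, "channel": channel}).encode()
    req = Request(AUTH_ENDPOINT, data=payload,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp).get("authenticated", False)

# The web login handler and the IVR call flow can both simply call:
#   verify_otp("customer-1234", "493021", channel="voice")
```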

Implementing voice biometrics for a web process, however, is more challenging, but still easily handled. The typical flow, as shown by vendors ranging from Authentify to VxV Solutions (my company), has a web user, after starting the login process, call a phone number and authenticate his or her voice, then either receive back a one-time PIN (also called a soft OTP) or be redirected back to the web application after passing the biometric claim.

In both cases, the key is that the bank can now standardize its processes for handling both web and voice transactions. However, standardizing the processes doesn't necessarily mean standardizing on a single method. It is expected that banks, and enterprises in general, will support multiple authentication methods based on the user's needs and status. For example, shipping an OTP token with the bank's name engraved on the back may cost upwards of $30 per user, but for key clients the cost may be mitigated by the fact that it's a very fast way to log in. Conversely, a voice biometric solution is typically much cheaper, though less convenient for web users, since it requires the user to make a phone call to enable their web session.

What is expected is the growth of a new range of multi-factor brokerage services, such as Ping Identity's PingLogin solution, designed to let a user select the preferred method of providing multiple factors. Consider a preferred bank customer: he (or she) may have an OTP token provided by the bank and a fingerprint scanner at home, and the bank may have also enrolled a voiceprint. When the user logs into the website from work, he could use a voiceprint or OTP; when calling in, he could use the same voiceprint or OTP; but when logging in from home, all three methods could be used. Fingerprint, in this case, would most likely be the fastest and least obtrusive.

Instead of integrating each of these solutions into the voice and web applications, each with its own dedicated logic, the authentication broker would simply determine which methods are available, which can be used on the current channel, and then allow the user to select the method of his or her choice.
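A minimal sketch of that broker logic might look like the following – the method names and channel rules are assumptions for illustration, not PingLogin's actual API:

```python
# Toy authentication broker: decide which factors a user may choose from,
# given what they have enrolled and which channel they are on.

CHANNEL_SUPPORT = {
    "web":   {"otp_token", "voiceprint", "fingerprint"},
    "voice": {"otp_token", "voiceprint"},  # no fingerprint reader on a phone call
}

def available_methods(enrolled, channel, has_scanner=False):
    """Intersect what the user has enrolled with what the channel can carry."""
    supported = set(CHANNEL_SUPPORT.get(channel, set()))
    if channel == "web" and not has_scanner:
        supported.discard("fingerprint")  # e.g. logging in from the office
    return enrolled & supported

# The preferred customer described above:
enrolled = {"otp_token", "voiceprint", "fingerprint"}
print(available_methods(enrolled, "web"))                    # from work: OTP or voiceprint
print(available_methods(enrolled, "web", has_scanner=True))  # from home: all three
print(available_methods(enrolled, "voice"))                  # calling in: OTP or voiceprint
```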

Again, the benefit of this type of broker is increased exponentially by the implementation of CAT. Common SOAP interfaces and easy integration into voice and web applications allow for this kind of flexible multi-factor authentication.

If 2007 is the year that CAT turns the corner, or crosses the chasm, or whatever we're calling it these days, I'm looking toward 2008 as the year of federated security. You can't have all of these banks investing in strong, multi-factor authentication without someone finding a way to monetize the implementations – and leveraging these internal identity databases and authentication methods lends itself to FFIEC-compliant banks becoming independent, trusted Identity Providers (IdPs). The currently blog-centric OpenID movement shows the beginnings of a decentralized security model in which a user could use an identity at their bank to get into their healthcare account, or into their cable system to see their latest bill. Adding trusted Identity Providers helps move the focus of OpenID from blogs to transactional accounts such as banking and finance.

An appropriate metaphor?

There's been some chatter lately in Identity 2.0 circles regarding Cardspace. Cardspace is Microsoft's latest take at controlling… er, participating in the nascent user-centric identity space.

The Cardspace metaphor seems harmless enough – you create a number of identity "cards". The idea is that, just as you keep an employee ID and a driver's license with your home address in your wallet, you can have a number of virtual IDs (I'll call them InfoCards, in reference to Cardspace's internal project name), which you store in your browser.

Actually, to take it a level deeper, there are two types of cards you can have: managed and self-asserted. The managed cards are the driver's licenses and employee IDs of the Cardspace world – the information on them is locked by the issuing party. Just as you have to get the address on your driver's license changed by the motor vehicles department, when your information on a managed InfoCard changes, you have to get the InfoCard provider to change the data. In contrast, there are self-asserted InfoCards, the equivalent of a business card. Anybody can go to Kinkos and have a business card printed up asserting that the holder is a certain person, at a certain address, working for a certain company. There's no real trust involved here – it's more a way to speed up the exchange of commonly needed details so you can conduct basic business.

Now, in theory, the metaphor makes sense. I have business cards for my company, for the firm I occasionally consult for, and personal cards that I give to friends and people in the bar industry. I also have a driver's license, my primary form of validated identity credential – and I keep both in my wallet.

This is where the Cardspace metaphor starts to break down. I can choose which wallet I put my cards in, and I can put that wallet into any pair of pants I own (or even a jacket pocket). Cardspace is Windows-centric, and not just Windows, but Internet Explorer only. Yes, there are homegrown plugins for Firefox, but they’re not technically supported by Microsoft. Though I have been able to jerry-rig Cardspace to work in Firefox on my Mac, it’s less than optimal.

Now, since the cards – even the managed ones – are loaded into the browser, they're not very portable. If I only had one or two machines, managing these cards wouldn't be horribly challenging – but I have three different identities on my computer (home, work, development), and each of these has three browsers (Firefox, Safari, Opera) – that's nine browsers. Add in the Parallels environment and its three browsers (Firefox, IE, Opera) and I suddenly have twelve different browsers where I could have to store these cards. Plus there's the browser on my smartphone – and I haven't even started to think about what happens when I occasionally use a colleague's machine or a shared terminal at an airport or hotel.

It’s the equivalent of saying that I can only carry my drivers license in a State of California issued wallet, and if I use another wallet, my license may not fit or possibly will not always give the same information… and I’ll need another wallet for each pair of pants.

Even if I were primarily a PC user, the model still doesn't work.

Dick Hardt, one of the real evangelists of the user-centric identity movement, has introduced Sxipper, which populates "business card data" into web forms and uses OpenID as the authentication method. The challenge in making Sxipper truly useful will be the ability to store the ID information on a secure Sxipper server, so that when I log into Sxipper from any browser it fetches the current information – and if there are any local changes, it synchronizes them back to the central server. Though I haven't broached it with Dick, I would expect this sort of architecture to be made available as an option as the product matures. As a user, I may choose to keep some identities local, some cached to a local machine, and some only fetched as needed from the network – a perfect solution for shared or public machines.

With Microsoft owning as many desktops as it does, Cardspace could become the dominant, if flawed, metaphor for the next wave of identity. Then again, it might just be a house of cards.

The New Premium Margarita – Was the original broken?

There's a movement in the high-end San Francisco cocktail houses when it comes to making classic margaritas, spurred by the popular version made by Julio Bermejo at Tommy's Mexican Restaurant. It seems that the new de rigueur formula is this:

  • 2oz Tequila
  • 1 1/3oz Fresh squeezed lime juice
  • 1/3oz Agave Nectar

This recipe – a 6/4/1 (6 parts tequila to 4 parts lime to 1 part agave nectar) – was published in the August 14, 2002 edition of the Washington Post, though there are probably earlier references online as well.

This recipe does create a very good margarita – and I’ve enjoyed many of these at cocktail houses spawned by Julio’s followers, such as Tres Agaves (co-owned by Julio). But is this Margarita significantly better than the classic margarita?

The classic margarita, as I have always understood it, is a 2/1/1 recipe:

  • 1.5oz Tequila
  • .75oz Triple Sec
  • .75oz Lime

In fact, when I look at the New Margarita, it's not a margarita at all. It's a tequila daiquiri. The original recipe for a daiquiri, from the golden age of cocktails (the 1920s), is:

  • 2oz light rum
  • 1oz Lime
  • 1 bar spoon powdered sugar

Adherents to the Julio method of margarita making believe that the use of agave nectar, an expensive and relatively flavor-neutral sweetener, enhances the agave flavor in the tequila… which I believe it does – to an extent. However, the new margarita method loses something in the process. The original margaritas used triple sec, a slightly sweet orange liqueur, to add both sweetness and a little more orange flavor. Varying the liqueur (maraschino, curaçao, Cointreau, Grand Marnier) changes the flavor, allowing the cocktail to be customized around the flavors of the tequila. Yes, it's possible to make a bad margarita if you pair improperly, but it's also possible for a talented bartender to make an amazing margarita, playing with many layers of flavor. The new method makes a consistently good margarita, but that's all.

The Killer Smartphone App

The rumor mill is burning the midnight oil regarding the eternally impending release of Apple's smartphone offering – the iPhone (or iChat AV Mobile, depending on which site you consider canon for scuttlebutt such as this). This possibly revolutionary device is thought to be the ultimate merger of media, data and telephony – offering full iPod music playback and synchronization with iCal, Address Book, Mail and .Mac. It'll support Cingular, or T-Mobile, or all of the mobile carriers, and it may or may not support 3G data networks… again, depending on the rumor source you trust.

But in response to all of the chatter about the media and synchronization capabilities, there is a true killer app that, if implemented, would make the iPhone the hands-down best smartphone in existence.

What I'm looking for (and what most of the smartphone community is looking for) is a smartphone with the emphasis on phone. As in: I primarily want to be able to make and receive calls.

Eleven months ago, I picked up an o2 Mini S on eBay. In the US, it's called the Cingular 8125, T-Mobile MDA or i-Mate k-jam. This device was touted by the telephone rags as the best fusion of PDA and phone: a form factor small enough to be comfortable to carry, a full day's worth of charge, a slide-out QWERTY keyboard and a touchscreen. For the first few months, I was generally happy with the device. I could sync it to my office PC and home Mac, the Bluetooth worked fine, and the text messaging capabilities were pretty darned solid. Music and video playback was smooth, and the camera, for a multifunction device, was better than average.

However, what I started to quickly notice is that the phone services, well, they weren’t just sub-par, they sucked.

The problem isn't endemic to just this one device, but to Windows Mobile devices in general: the telephony functions are constantly fighting for CPU cycles with the general PDA functions. The result manifests itself in many ways – not having the resources to make the phone ring, or not having the resources to register my pressing the "answer" button on an incoming call. This week alone, I've missed three calls simply because the phone didn't respond when I pressed answer, and it isn't just me. This problem affects most of the touchscreen Windows Mobile 5 devices.

The biggest problem for me, however, is voice stability. As my work is primarily in speech technologies, I can easily tell how well a telephone device encodes an audio stream. Now, HTC (the manufacturer of most of the non-Samsung/Motorola Windows Mobile devices) has already crippled these phones with a very poor microphone – a pinhole design that, even under the best conditions (as tested with a simple audio recording application), creates poor-quality recordings. This is further complicated by a core Windows Mobile issue.

It seems that Windows Mobile has made the audio encoder (codec) just another software process that has to fight with the other applications for CPU cycles. When the CPU is occupied with other tasks, the encoding is crippled to the point that simple speech recognition can be dramatically impacted. Again, this was simple enough to test by calling into one of my systems without any other apps running, then with email, then with email and a video. Overclocking the CPU helps, but not consistently and not enough.

Though this problem affects my WM5 device, some version of it affects every smartphone out there. Count the number of BlackBerry users who also carry standard cell phones because the phone experience on those devices isn't very good. Treo users seem to be in the best shape, but it's still less than great.

It seems that if you want a successful smartphone, it needs to be just that – a smart phone. Apple has the marketing and the mindshare, but can they actually create a compelling device?