Waging and Winning the Enterprise 2.0 War, and Getting the Confusion Out of the Way

•June 26, 2011 • Leave a Comment

Fighting the Web 2.0 War?

I’ve met with so many people asking me what it was like to roll out Web 2.0 capabilities across an enterprise that I feel compelled to put together this post, outlining what I learned doing it and the personal myths I debunked along the way. The title of the post is self-confessedly provocative, although in my opinion it only reflects how much day-to-day effort it will take you to get your enterprise practicing the Web 2.0 approach that salespersons fob off onto just about anyone these days; it’s even getting to be a brazen hard-sell.

From Web 2.0 through Enterprise 2.0 to Web 3.0

I start from the assumption that most readers of the present blog will already be familiar with these concepts, but there’s no harm done in getting out of the way the obvious stuff to prevent later confusion.

The term Web 2.0 was originally coined by the good people at O’Reilly Media, an American publisher of books on computing, from Python through XML to Linux. More specifically, Tim O’Reilly, one of the two founders of the company, originally used the phrase in connection with online enterprises such as Amazon that had weathered the dot-com bubble that burst circa 2001.

What was so special about them was that their business model relied on being shaped by their customers and favored their active, freely offered participation, as in the case of Wikipedia, Flickr and Delicious, for instance. Most of them also relied less heavily, or at least less conspicuously, on ad placement.

(Image copyright 2005, O’Reilly)

Most of these characteristics refer to functionality and activities, whereas I believe Web 2.0 has progressively come to stand for the tools used to carry out those activities online. Namely: blogs, microblogging, wikis, and networking and sharing services (bookmarking and tagging).

Coined circa 2006 by Andrew McAfee, principal research scientist at MIT‘s Center for Digital Business, Enterprise 2.0 refers to the use of those same Web 2.0 tools in the enterprise setting. Namely: blogs, microblogging, wikis, and networking and sharing services (bookmarking and tagging).

As opposed to what I hear a lot of people say, Web 3.0 is not Web 2.0 evolving into some higher form. Web 3.0 is a technical approach to managing the wealth of data on the Web, the so-called “info-glut.” Its main goal is to relieve humans of the chore of making connections between items of information. For instance, the burden is still on you these days (although it is getting better) to look up train schedules and available hotel rooms at your destination by visiting several sites, and to establish the connection between the two yourself.

The vision behind Web 3.0 is to have computer agents connect those tedious dots. Ideally, they could make datasets — websites, pictures, tables, files, etc. — interoperable, i.e. make it possible for agents to query all such datasets and weave (semantic) connections between and among them, regardless of how these datasets were originally encoded, whatever metadata schemas were brought into play or whatever software is meant to open them.

Programs can and will connect those dots only if they can be instructed on how to do so through human modeling of knowledge areas, the tangible end product being ontologies. Linked Data is a global endeavor and methodology to achieve just that, and one of its most remarkable achievements is DBpedia.
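The interoperability idea can be made concrete with a toy sketch in Python. All record layouts and field names here are invented for illustration (they are not taken from any real RDF vocabulary): two sources describe resources under different metadata schemas, and a small mapping into a shared vocabulary is what lets a single query run across both.

```python
# Toy illustration of dataset interoperability: two sources use
# different (made-up) metadata schemas; mapping both onto a shared
# vocabulary lets one query serve both. Not a real Linked Data stack,
# just the underlying idea.

# Source A: library-style records
source_a = [
    {"dc_title": "Semantic Web Primer", "dc_creator": "Smith"},
]
# Source B: web-style records describing the same kind of resource
source_b = [
    {"name": "Linked Data Basics", "author": "Jones"},
]

# Mappings from each local schema to a shared vocabulary
map_a = {"dc_title": "title", "dc_creator": "creator"}
map_b = {"name": "title", "author": "creator"}

def normalize(records, mapping):
    """Rewrite each record's keys into the shared vocabulary."""
    return [{mapping[k]: v for k, v in rec.items()} for rec in records]

# Once normalized, a single "query" spans both datasets
merged = normalize(source_a, map_a) + normalize(source_b, map_b)
titles = [rec["title"] for rec in merged]
print(titles)  # ['Semantic Web Primer', 'Linked Data Basics']
```

In the real Semantic Web, the mappings are expressed in shared ontologies and the querying is done in SPARQL, but the principle is the same: agreement on a common model is what makes heterogeneous datasets queryable together.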

For more background on Web 3.0, consider the 2006 article by Victoria Shannon for the New York Times here. For a concrete example of what can be achieved with interoperable datasets, review FAO‘s Agris. The work FAO has put in is simply breathtaking. For more information on ontologies, consider my post here.

What’s All the Clamoring About?

The present post being only an opinion piece, I’ll opine that a couple of assumptions are made about Web 2.0 that never get voiced and that, when carried over, suggest possible analogous avenues for streamlining the way knowledge flows in the enterprise setting. The first list below recaps the assumptions about Web 2.0; the second explicitly spells out the parallel statements for Enterprise 2.0. The numbering is consistent across both lists, i.e. item 1 corresponds to item 1.

  1. Everybody contributes to Web 2.0
  2. Everybody contributes free of charge, in the interest of the greatest number
  3. The greatest number benefits from the knowledge captured in Web 2.0 applications and services
  4. Everybody finds it easy to contribute something, if only minor edits
  5. There are so many creative people in Web 2.0 that if only one could harness their collective intelligence, any problem would be solved
  6. Though there is no real supervision of how Web 2.0 grows exponentially, everybody does a great job of organizing themselves
  7. The dash and flair of Web 2.0 practitioners are attractive and desirable

This rose-tinted understanding of Web 2.0 is applied to the enterprise setting:

  1. Everybody in the enterprise will participate in Enterprise 2.0
  2. Everybody contributes in the interest of the greatest number, with no regard to the time they invest; they may even invest their own spare time
  3. The greatest number benefits from the knowledge captured in Enterprise 2.0 applications and services
  4. Everybody finds it easy to contribute something, if only minor edits
  5. There are so many creative people in the enterprise that if only one could harness their collective intelligence, any problem would be solved
  6. There is no real supervision of how Web 2.0 grows exponentially, and the same should be true of Enterprise 2.0 if the enterprise is to stay true to what Web 2.0 is about. Everybody in the enterprise could probably do a great job of organizing themselves
  7. Enterprise 2.0 sounds like a modern way of developing the employee

Reality 2.0 Bites

On the face of those statements, one immediately sees that the reality behind the smokescreen put up by a whole segment of the software industry may assume a quite different shape. Believe me, it is simply not true that everybody will be interested in Enterprise 2.0. Some will take exception to yet another piece of software being forced onto them; software fatigue, it is called. I do not intend to lean on the very trite idea that “knowledge is power,” because that cliché is nonsensical (understanding how to apply knowledge is the real power, although one would have to specify the kind of power one is talking about). But basically, knowledge retention is a given in any context, and Enterprise 2.0 will not solve that age-old problem. Only those who share will share; it is quite easy to dissemble in that environment too, by simply not participating. It is simply not reasonable to think that you might fight that mindset off.

Also, although your company will at first be allured by the idea of leveraging points 1-7 as laid out above, it may get cold feet sooner than you’d expect, once the degree of decentralization that Enterprise 2.0 implies if it is to catch on at all (i.e. who gets to say what in the enterprise setting) comes home.

A rule of thumb you can take away from this post right now is that the nitty-gritty of Enterprise 2.0 implementation gets overlooked all the time, and that the slight power shift it entails (i.e. who gets to say what in the enterprise setting) is little known, little understood, and generally blown out of proportion when employers or consultants hit upon it.

The Plot Thickens

When you first set out to implement an Enterprise 2.0 environment, you naturally turn to the literature to glean tips and tricks. If you asked me for the one book that says it all and offers clear explanations and actionable advice, I would refer you without hesitation to Wikipatterns and its companion site, home to a community of users of the Confluence suite, which I would recommend for its ease of use and great looks. More about usability in a later section.

Unfortunately, actionable tips and tricks are few and far between and tend not to be very helpful; generally speaking, you’ll end up collecting the same run-of-the-mill rehash of how important it is to meet your customers’ needs. That means little, as you obviously do things for a reason if you’re in your right mind. I have already had the opportunity to look at that issue: please consider my post on the elusive internal customer.

What I really wanted help on when I first envisioned rolling out Enterprise 2.0 was clear-cut answers to the following questions:

  • How do you get your project off the ground?
  • How do you make sure that it stays the course you set initially?
  • How do you get from stage A to B to C and so forth?
  • Whom do you work with?
  • Which Enterprise 2.0 suite should you go for?
  • What are the guidelines for using Enterprise 2.0?
  • How do you ward off unhelpful attitudes, and what do you make of them?

I’ll be giving straightforward answers in the rest of the post, starting with the last question, as attitudes put me off my game many times until I was familiar enough with them to brush them off easily without frustrating the individual giving attitude.

Humans 2.0

The funniest story that comes to mind is a two-episode account I wouldn’t believe if I hadn’t actually lived it. Blogs and wikis get everybody so excited (please remember these) that they’ll let you know really soon that they just want in. That happened to me. I’d anxiously imagined that people would never be won over to spending some of their time going Enterprise 2.0, but they almost begged to join. Wow, I thought: minimal support would need drumming up.

When it came down to brass tacks, attitudes changed significantly. Those early adopters would come up with questions that testified to such a disconnect from the enthusiasm and goodwill they had initially offered that I came home scratching my head, trying to unpack the line I was left to struggle with: “All right. You want my help running that thing [the Enterprise 2.0 tool suite]. Sure… I told you I would. But can you convince me now why I should? What’s in it for me?” Take that.

It’s called getting cold feet. But I didn’t take it that way. I was just confused. And then it dawned on me that I would probably have reacted in a similar way.

Coming home, I sat down at my computer and bumped into a new bookmark-sharing site. It looked nice and promised to do things the others hadn’t hit on. I registered. Toyed with it. Started thinking: why on earth did I register? What’s in it for me? Maybe I should rethink filling in my details. A typical case of the pot calling the kettle black, huh?

It’s alright, then, to act on impulse because of the potential you think something holds, and then to step back. In the enterprise setting it makes even more sense.

Long story short: by giving some background and explaining away unreasonable fears, I finally won over my initial early adopter. I would surely meet with more resistance, and it would not let up until I proved it wrong.

Some usual questions will crop up and you’ll have to meet them head on.

  1. “What if everybody in the enterprise starts badmouthing everybody else?” That’s unlikely. Those so inclined will just have found themselves a new way of getting fired.
  2. “What if somebody says something that is not true, right or correct in the wiki, blog and so on?” Well, what’s stopping them now?
  3. “What if somebody lifts content and passes it off as their own?” Well, what’s stopping them now?
  4. “What if somebody leaks company info?” Well, what’s stopping them now?
  5. “Am I responsible for what I commit to the wiki or blog?” Well, aren’t you always responsible for whatever you do in your enterprise?
  6. “Who can vouch for the quality of what I submit?” YOU are. (You cannot be too emphatic here.)
  7. “What if I can’t submit anything?” Not a problem; you’re not required to.

If addressed correctly and head-on, those questions will go away, believe me. The only issue anybody really has with Enterprise 2.0 is that everybody’s looking. Ah, the Eye of the Beholder. I called it “community-controlled” and it went down smoothly.



Information Science Redux

•June 19, 2010 • 1 Comment

The Anti-Copernican Revolution; or, back to Ptolemy

The Copernican Revolution refers to Copernicus’s paradigm-changing view of the sun as the center of the Solar System, away from that of Ptolemy, who considered the Earth as its center.

As a newbie fresh out of library and information school, I’d been forewarned that it would happen to me at some point: somebody would try to fob repackaged LIS off on me and sell my trade down the river.

In less cryptic terms: I have now been subjected to the spiel of consultants whose groundbreaking concept is to have corporate users know how to search for information and, what’s more, engineer search queries and tap into resource directories to track down the right info. Also thrown in for good measure was the possibility of monitoring “datasets” or “knowledge bases” and being alerted when relevant keywords were detected by a leading-edge algorithm (I’ll give it to you: no prior selection of keywords needed). Boggles the mind, huh?

When asked to lay out some kind of rationale behind such a service, the answer generally goes: “well, there’s no one who actually does that in our information-starved world.” Point taken.

In a jiffy, I went from believing that trades gravitated around the Sun of All Jobs to believing that they actually revolve around myriad earths, with disoriented customers being attracted to them for better or worse. Now LIS is anyone’s job, and I really mean anyone’s.

Stereotypes Die Hard

I have absolutely no bone to pick with non-LIS people plying info as their trade, except when their pitch disregards and disparages info pros as asocial hermits bent on collecting newspaper clippings that nobody cares about.

Weak Signals and the Amorphous Shape of Competitive Intelligence in Its Formative Stages

I guess that you’ll want more context to grasp what I’m saying here, so here goes. I recently attended a great conference on competitive intelligence and I had a blast listening to and sharing with well-seasoned practitioners how they went about their daily jobs.

Big on their agenda was how they pick up on weak signals to make sense of oncoming disruptive events or trends that would otherwise go unnoticed for lack of visibility, either in major online media or in deep-web bibliographic databases.

Of particular interest to me was how such intelligence breaks the mold of peer-reviewed information, as its very amorphous nature lets it escape expert validation.

That such intelligence takes its time hardening into definitive meaning and shape doesn’t mean that you cannot ferret it out in its formative stages. To do so, you must have a strategy in place to net it: setting up alerts, attending conferences, tracking company activity, and so forth.

At least that was what I thought made sense. But I was in for a big surprise.


Ontological Gobbledygook vs the Simple Truth about Ontologies

•May 12, 2010 • Leave a Comment

The Ontological Conundrum

What is an ontology? Does it belong to philosophy? To artificial intelligence? To library science? What I have struggled to grasp is why so many champions of ontologies should obfuscate a perfectly simple indexing and cataloging tool so that its main goals and benefits get overlooked in the process.

I have meant to put up a post on this ever since I started this blog but have had a hard time finding an angle I felt comfortable with. This is my shot at it.

Enter Obscurantism

For the longest time, ontologies and attempts at making sense of them, such as Gruber‘s or Smith‘s, simply boggled my mind, as they strangely seem to rely on the most difficult philosophical or A.I. systems out there, which are certainly a welcome addition to their respective disciplines but bring little in terms of clarification.

Just for the record, I firmly believe that references to philosophy or A.I. may as well be jettisoned.

Ontologies Unpacked

The easiest way to explain ontologies is to:

  1. lay out the issues and challenges they attempt to solve
    • Make datasets described using various metadata element schemes interoperable
    • Gather together widespread information sources
    • Enable the automated working out of relationships between resources (broadly speaking: publications, concepts and individuals)
  2. explain the capacities they bring into play
    • Descriptive languages (RDF vocabularies) amenable to computer processing, all of which derive from RDF or RDFS
    • Languages for extending what metadata RDF vocabularies can accommodate
    • Query languages such as SPARQL, which allow computer agents (programs) to draw inferences (i.e. conclusions) from the relationships that RDF-encoded metadata element schemes make amenable to processing, so as to shed light on relationships that had not been noticed before (i.e. by human beings snowed under the sheer volume of available information out there)
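The inference point above can be sketched in miniature with pure Python. The tiny class hierarchy is invented for illustration; a real system would state it in RDFS/OWL and let a reasoner or SPARQL engine do the work. The idea is the same: from explicit “subclass of” statements, a program derives relationships nobody asserted directly by computing the transitive closure.

```python
# Toy inference over an invented class hierarchy: from explicit
# "subclass of" pairs, derive the implied ones by transitive closure,
# roughly the way an RDFS-aware reasoner treats rdfs:subClassOf.

explicit = {
    ("Journal Article", "Publication"),
    ("Publication", "Information Resource"),
    ("Dataset", "Information Resource"),
}

def infer_subclasses(pairs):
    """Return all (sub, super) pairs implied by transitivity."""
    inferred = set(pairs)
    changed = True
    while changed:  # repeat until no new pair can be added
        changed = False
        for a, b in list(inferred):
            for c, d in list(inferred):
                if b == c and (a, d) not in inferred:
                    inferred.add((a, d))
                    changed = True
    return inferred

closure = infer_subclasses(explicit)
# The machine now "knows" a fact nobody asserted directly:
print(("Journal Article", "Information Resource") in closure)  # True
```

This is the payoff the list above describes: once relationships are encoded in a machine-processable form, connections that no human spelled out fall out of a mechanical procedure.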

Further Exploring

Surprisingly enough (you would expect it to be much more cryptic than it is), the best jargon-free introduction to the subject comes directly from the W3C. I’d also like to refer you to Towards an Infrastructure for Semantic Applications: Methodologies for Semantic Integration of Heterogeneous Resources by Liang et al., which you can download from FAO here and which gives more of the context for using ontologies than I have here.

Also have a look at the videos by the NCBO, which do a great job of laying out the basics, without the mumbo jumbo.

Social Networking Reporting on Inaccessible Websites

•April 28, 2010 • Leave a Comment

Enter HerdictWeb

This short post is to report on an excellent social networking site that leverages crowdsourcing to track inaccessible sites the world over: HerdictWeb. I just love how it mashes up Google Maps with reports by participants. Of special interest to me is also the collection of reports delving into countries’ online censorship trends and their interactions with local ISPs.

Berkman Center for Internet & Society

The whole project originates from the Berkman Center for Internet & Society at Harvard University, which deserves mention. I find myself going back again and again to the Berkman Center’s site, a research center that has been probing cyberspace and sharing its findings online since 1997. I highly recommend it for:

  1. Its reports, a good example being the rather impressive report on youth, privacy and reputation here
  2. Its podcasts, with Radio Berkman’s in-depth conversations here, which focus mainly on IP in the age of the Internet and on online privacy
  3. Its educational resources, a great instance being Copyright for Librarians here

Why Store?

•March 29, 2010 • Leave a Comment

I’m not sure the epiphany I had a couple of months back will come across well, being, as I am, an info pro speaking essentially to other info pros. Especially since that epiphany brought about another one, even less “palatable.” I’ll risk it and spit out what I need to voice.

What was that epiphany about, then? It was that, on a personal (nonprofessional) level, I couldn’t be bothered to purchase music that comes on a physical medium, or even, to a lesser extent it’s true, books (although I suspect my reserve there stems from the immaturity of devices such as Amazon’s Kindle). I have ditched CDs and gone over to iTunes and MP3s only, even if that means I am held captive by Apple.

But does it really mean that? I think not. There’s nothing stopping me from doing business with Amazon or HMV, is there? Money should not be thrown out the window: point taken, and it is one robust argument, but do bear with me. Time will tell. Note, however, that iTunes makes it possible to burn CDs of anything you purchase from it. So if you’re bent on laying your hands on something tangible, there you go.

This change of attitude toward owning physical and digital material has since leaked into my work. Namely: making (electronic) resources accessible, be they articles, reports, and whatnot, is part and parcel of our jobs, right? But is actually keeping and storing primary documents in-house part of that too? This is a highly debatable yet sensitive issue as, on the face of it, the equation one may draw from saying no is that in that case we’re all out of a job.

But I think not. To get the obvious out of the way first, let me point out that I firmly believe libraries the world over must still hold down their role as nonpareil custodians of the world’s knowledge, which has essentially come in the form of books and other printed materials for ages; I wouldn’t rely only on the iPad or the cloud to convey and store centuries of human ingenuity.

When it comes to the private sector, and more specifically the corporate world, I’m not so sure. Let’s face it: Thomson Reuters (ISI Web of Knowledge seems quite stellar, doesn’t it?) and Wolters Kluwer of Ovid fame do a very good job of providing metadata records pointing to full text, and companies such as Infotrieve can even store your purchased materials for a fee, getting the painful and ubiquitous question of copyright out of the way.

I even suspect Infotrieve makes excellent use of the best catalogers out there too. We’re absolutely not out of a job, are we? How about those so inclined reposition and remarket their skills as metadata pros?

To be done well, cataloging in an enterprise needs human clout, and considering the job market outlook these days, that is not happening; the reverse is the rule. I would certainly advocate retooling information centers’ human resources for value-added tasks such as bibliometrics or CI monitoring. It may sound like the same rehashed stuff; it’s more a call to arms, really.

The Elusive Internal Customer

•March 25, 2010 • Leave a Comment

I remember reading Librarianship: An Introduction, by Chowdhury et al., and especially the part (6: 24) dealing with the “importance of marketing to libraries.” One assertion, drawing from several sources, went: “we should adopt the language of the private sector and refer to library users as customers” (p. 263). I was piqued that the language of the private sector would be so unquestioningly adopted.

According to the OED, “customer” has the following meanings:

  1. a person who buys goods or services from a shop or business
  2. a person or thing of a specified kind that one has to deal with

That reference to the private sector and its language set me thinking. In the corporate setting in which I work, I guess definition 2 is more appropriate, but it is inextricably interwoven with definition 1 and also connotes the jargon of quality management, which considers employees supporting other employees as service providers and internal customers, respectively. The idea at work here is to operationalize the concept of quality as closely meeting the requirements stated by, or inferred from, external customers.

At the end of the chain, the external customer purchases the goods or services the organization retails. It somewhat confuses me to consider my colleagues customers, because the good working relationship I may have with them, viewed through the lens of an internal customer-provider relationship, has an insidious tendency to deteriorate when the service I offer is not up to what they expect.

In an article rather provocatively called “Down with the Internal Customer,” John Guaspari explains how the meaning of the phrase has been twisted out of shape by equating internal customers with the tough nuts we can be ourselves as external customers when we buy goods and services from outside providers.

The thinking goes that in that role we may behave any way we wish because we have paid dearly for what we have bought. Unthinkingly and uncritically carrying that notion over to the corporate world cheapens the roles occupied by info pros, dragging us down to almost subservient positions, and it equips unreasonable internal customers caught in difficult situations with the implacable, undeniable authority of the almighty, irascible external customer.

I have never been that kind of external customer myself; have a very short fuse for those who don that role; and do not condone it in an enterprise setting.

Back to basics: internal customers represent a piece of the symbolic external customer, to whom we’ll eventually sell the product that our joint efforts as co-workers (our service products) make up. That’s good quality management geared to improving everyone’s performance, including those in the back of the back office, by setting everyone’s sights on a series of quality standards that all feed into those of that symbolic being.

I believe that the internal customer-provider relationship is never to be equated with an opportunity for the customer to vent their frustration or to look down on fellow co-workers. Bottom line.

For a company to secure quality assurance or foster total quality improvement, co-workers need to refocus on and recognize those basics of quality management.


The 90% Doxa

•February 28, 2010 • Leave a Comment

It has recently, and (too?) frequently, come to my attention that “90% of information is conveyed by the titles” of peer-reviewed articles such as those found in bibliographic databases, say Scopus-like databases. Now, I am always wary of rash judgments like these, for several reasons (please note that I provide a bibliography as a separate attachment):

  1. The assertion fails to substantiate its underpinnings explicitly (i.e. where does it derive its own information from?). I have found nothing in the literature that could back it up (Mizzaro).
  2. Neither does it articulate the view it is, I believe, supposed to reflect: namely, that 90% of what is stated in the full text of an article may be expressed through its title.

Concerning point 1, I’d be glad to have any of you point me to a reliable source that conclusively establishes that the information content, as it is called in the literature (Barry, 149), can be worked out at 90%.

The assertion (point 2) also begs the question of why it would matter to have so much information crammed into titles. The information content of document representations such as titles, abstracts and indexing terms is geared to letting readers know whether the full-text articles they represent are relevant enough to justify perusal. To borrow Barry’s phrase (1293), document representations may be assessed in terms of how well they perform as clues to the relevancy of full texts.

That issue begs another question: what is meant by relevancy? Relevance? Pertinence?

If, at the end of the day, the relevancy of articles is not what is meant or glanced at, then I fail to see what could be meant here. Surely the assertion doesn’t refer to the proportion of substantives used in titles as opposed to stop words (Tocatlian, 346), as that would not prove that a title conveys 90% of the information contained in the full text it represents.
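To see why counting words proves nothing, here is a toy calculation in Python (the title and “full text” are invented): even when every content word of a title recurs in the full text, the title still covers only a small share of the full text’s distinct content words, so a claim like “90% of the information” needs an explicit, testable definition of information content before it means anything.

```python
# Toy calculation (invented title and text): how much of a full text's
# distinct content vocabulary does its title cover? A crude stop-word
# list stands in for a real linguistic analysis.

STOP_WORDS = {"the", "of", "a", "an", "in", "on", "and", "to", "is"}

def content_words(text):
    """Return the set of lowercased words that are not stop words."""
    return {w for w in text.lower().split() if w not in STOP_WORDS}

title = "Effects of cocoa consumption on blood pressure"
full_text = ("Effects of cocoa consumption on blood pressure were studied "
             "in a randomized trial measuring systolic and diastolic values "
             "after daily intake of dark chocolate over eighteen weeks")

t, f = content_words(title), content_words(full_text)
coverage = len(t & f) / len(f)  # share of full-text content words in title
print(round(coverage, 2))  # 0.24
```

Here the title covers roughly a quarter of the full text’s content vocabulary, and nothing about that number says how much of the text’s *meaning* it conveys; that is precisely the gap the 90% claim papers over.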

Now there’s also the dimension of the effectiveness of document representations when it comes to carrying information (see Byrne). But measuring effectiveness does not amount to making approximately all of the content of a full text accessible through its title.

A review of the literature I could find on the topic indicates that abstracts, rather than titles, fare highly in readers’ estimation when it comes to relative importance (Barry, 1295).

Or could it be that by reading the title of an article one knows 90% of what there is to learn from the full text? That seems an unreasonable claim to make, on the back of my own experience making a living looking for the right info.

Just a quick parting shot: Would you say that “The Devil in the Dark Chocolate,” by an unknown author and retrieved from PubMed [PMID: 18156011], is a good key to its aboutness, considering no abstract is available for it?

 