Well, gosh, that sounds like a conspiracy theory, doesn't it?
"That can't be," the Twittering masses tell me. OK, well, two people from the Twittering masses. "If the two had anything to do with each other, how come SL search doesn't work?" they rightly reason.
"Well," I say, thinking out loud. "Maybe our sample is too small? Maybe the reason our search doesn't work is because it isn't meant to make our world work, it's meant to make a prototype for the real world and we're just guinea pigs?"
And hey, it's not like there's some evil dark plan that is somehow hatched diabolically. It's more like...the entire California Business Model. That model is all about making big platforms. Letting people join them for free, easily, and upload crap. Then datamining the hell out of them. Platformists have been doing this now so heavily and so extensively that they probably don't even think about things like "the customer" or feel motivated by notions like "the customer is always right".
We aren't their customers. We don't pay anything to them. We're just "there". Their real customers are marketing companies. So plans that involve prototyping on our backs could easily and effortlessly be developed without even being consciously assessed as somehow "exploitative".
Nobody will listen to me when I tell them to read 4.3 in the TOS.
I was reading along here about the latest Google caper in a very rare New York Times tech story -- and one about Google at that -- that had *comments open*. Those used to be exceedingly rare. So rare that I conducted campaigns with the Times Ombudsmen and the kids who edit the tech page about this, and uncovered the whole fascinating story of how they moderate. In time, they have opened up a few more stories than they used to, but still, there's a feeling that unlike Krugman, tech is "off limits".
In any event, as I read the story and comments, I felt like it was deja vu all over again. Hmmm, where had I read a story *just like this* about certain businesses getting screwed...about certain devs deciding what was a "good" or "bad" business...about people's livelihoods destroyed in a day...about old ladies using the analogy of "skimming the fat off chicken soup" to explain how they used Google before this, and that worked just fine, but now they can't find anything.
Hmmm...Well, readers of this blog and denizens of the SL forums will recall -- it's Second Life.
We've been through a year or more of absolute horrors -- our parcels that used to be at the top for some key words flipped to the bottom; completely irrelevant people getting to the top; worthy people finally getting to the top; obscure key words being rewarded; strange policies like "everything has to be in search on the lot" or "too many things are in search on the lot"; big sims getting higher in search than little parcels; picks being used; picks not being used -- the list of tweaks riotously harming the economy as people got shafted over and over again despite trying to be honest and just put up a search ad with a key word each week -- well, I don't have to rehearse it here. It has traumatized the community. It has always felt contrived.
Who ordered this, really, and why?
In a page that might have been taken straight from a Sea Linden blog, except applied to the larger project of Google search in the outside world, Google execs wrote:
“This update is designed to reduce rankings for low-quality sites — sites which are low-value add for users, copy content from other Web sites or sites that are just not very useful,” Amit Singhal, a Google fellow, and Matt Cutts, who leads Google’s spam-fighting team, wrote in a company blog post. “At the same time, it will provide better rankings for high-quality sites — sites with original content and information such as research, in-depth reports, thoughtful analysis and so on.”
So naturally, everybody wants to know who gets to decide what "high-quality" means. Original content might be something you can detect with a bot or an automatic process, or from the links people make, but "thoughtful analysis" -- who determines that? How?
For example, if there ever was a site that "copies content" from other sites and is "just not very useful" on a good number of topics, it's Google's evil twin and beloved brother, Wikipedia. If what Google says is true -- that they're now going to push down in search those pages that copy from others -- then Wikipedia ought to be pushed down tremendously. The instances of its plagiarism and shameless cutting and pasting from other sites and paraphrasing without reference (or token reference with a footnote to one aspect of another original piece) -- these are legion. So, Google, how about it? How's Wikipedia going to do?!
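To be fair, the "copies content" half of that equation is the part a machine really can check. Here's a minimal sketch of the kind of shingle-overlap test duplicate detectors use -- my own toy illustration, not Google's actual method, whose signals are secret and surely far more elaborate. The point is that a "copied" function is easy to write, and a "thoughtful" function is not:

```python
# Toy duplicate-content check -- NOT Google's algorithm, just the classic
# shingling + Jaccard similarity technique. The k=5 window and the 0.5
# threshold are arbitrary choices for this sketch.

def shingles(text, k=5):
    """Return the set of overlapping k-word windows ("shingles") in text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets: |A & B| / |A | B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def looks_copied(page, corpus, threshold=0.5):
    """Flag a page whose shingles heavily overlap any page in the corpus."""
    s = shingles(page)
    return any(jaccard(s, shingles(other)) >= threshold for other in corpus)

# Near-verbatim copying is trivially caught...
original = "the quick brown fox jumps over the lazy dog every single day"
copy = "the quick brown fox jumps over the lazy dog every single night"
print(looks_copied(copy, [original]))  # True
# ...but there is no analogous function you can write for "thoughtful".
```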
What this effort is mainly meant to do is punish sites like eHow, which aggregate other sites' content and seem to be just link farms. Except, eHow is useful to some people wanting a lite fix -- not everything has to be the deep tekkie "thoughtful analysis" with "reports" that Googlians think it has to be.
The sophisticated geeks who HATE gaming (when it's not their own sort of gaming) and are super honest and super high-quality-oriented (when it's not their own porn site) don't like eHow and various other paid-content link farms because, they say, those sites sell ads on other people's content.
Hmmm. That sounds like Google and Youtube, eh? Upload stuff, sell ads on other people's content.
I hate aggregator sites that grab my blog and sell ads on it -- and you know who you are. But there does seem to be a gray area here, and I would love to know whether they have banks and banks of live humans trained like Medieval monks sitting at consoles determining what is "thoughtful" or not.
Thinking about all this, I began to wonder: did Google come and use SL as a whole to prototype this algorithm they've come up with, so similar to our own?
Gosh, that sounds far-fetched, eh? Would they deliberately do that? How? Would they pay Linden Lab to do that, or give them the GSA license for free? Or how?
Or -- this is much more likely -- did they just *pay attention* to how it was going? Watch the issues. See how they developed. Gauge the public's reaction when stores that Google -- and Linden Lab -- thought were "cheap" and "tacky" and "should be removed" were removed -- to see how much live pain there was.
In what way did they watch? How does this work? If you use the GSA, that doesn't necessarily mean that all its results and issues and fixes get piped back to the Google Mother Ship, right? I mean, it's like buying a copy of Word -- that doesn't mean Microsoft grabs all your documents, right? So it wouldn't work that way...would it?
Well, could it work more informally? Google engineers hang around the same bars. Or the guy who is the vendor of the GSA to LL chats them up, solves problems with them, etc. I mean, if you buy or rent JIRA, it's not like you'd never talk to the JIRA people, or that they'd never learn your issues, right?
So I just don't know how tight or loose, full or empty of data such a process is, and I ask. I get to ask, because it affects our livelihoods.
I'm not afraid of conspiracy theories, although contrary to popular belief, I don't promote them and I don't have a tin-foil hat made out of the foil wrapper of an egg salad sandwich. The theory of the FIC isn't a conspiracy theory -- it's a field report, one that was then further validated by Linden Lab's own on-the-record statements or statements from inworld chat transcripts (oops, maybe that's why those have to be discouraged now in the new TOS).
The problem is, we live in a closed, authoritarian society. We don't know what decisions are being made, we don't have much input into them, we don't know the reasoning for things -- it's all opaque. Less and less information is available; the world's statistics are getting more and more filtered. The figures we used to take for granted aren't available any more.