Forum:Namespaces?


Yes, it's hard to decide how to do namespaces better, and it's easy to end up with a mess where media categories, tropes and works don't have a fixed hierarchy and appear to be in random order, much like on TVT.
But it's worse without them, because unless everything is written very clearly, it's impossible to tell what is what from links, and it may or may not be obvious even halfway through the article: is it self-demonstrating, starting with an example, or otherwise in medias res - or is it a work? And then there are common terms with several possible uses.
Consider this:
> (Move log); 07:42 . . Vorticity (Talk | contribs) moved page Space (Music) to Space ‎
Does anyone else see the incoming headaches?

TBeholder (talk) 05:19, 25 March 2014 (UTC)
Heh. We hashed over this issue pretty extensively already. Space, I admit, is one of the more questionable ones, here -- at some point, it will probably be moved to a science fiction trope. Still, there was a lot going into this:
  • We wanted to get closer to Wikipedia's names, so that the majority of the Wikipedia links would work without adjustments. There are certain cases where it will only get you close, but you're still only a click away. (No, we are not becoming Wikipedia ... we are providing a link to go there if you want to see encyclopedic information. It's like outsourcing.)
  • The namespaces were essentially arbitrary anyway, and you'd have to remember which was which.
  • If a work had multiple adaptations, you'd have to know whether an anime was Anime First, started as a manga, or started as a light novel when making a link. Or was it a Miyazaki anime over in the Film namespace? That sort of mess discourages casual editing, and possibly even casual browsing.
  • Most of what the namespaces were separating is handled by MediaWiki categories.
  • You seem to be in the habit of checking the URL to find out what something is. But we have newer, better ways of doing it.
    • How can you tell if something is a trope or a work, even if it's potholed? Well, you can't on TVT without highlighting the link and reading the URL. But here? You can just turn on one of the two Trope Highlighting gadgets. Links to tropes and YMMV tropes appear in a different color.
    • Want more info? Turn on the Navigation Popups gadget. A small preview of the page gives you an idea of its content.
      • Note: the Navigation Popups gadget isn't actually working right now, but it's pretty slick on Wikipedia. Getting it to work on MediaWiki 1.22.3 is a priority for us.
      • I'm tempted to make it read Laconic instead, but those are often pretty bad...
  • Namespaces make for more work when editing. Of course, some of that can be negated by the Auto Complete gadget -- but the more namespaces, the more cluttered that gets. Requiring namespaces for everything makes for worse Huffman coding.

So, that was my thinking going into this. But you're seeing some headaches emerging from the idea of getting rid of namespaces.

But it's worse without them, because unless everything is written very clearly,
it's impossible to tell what is what from links,
  • Unless you're using gadgets...
and it may or may not be obvious even halfway through the article:
is it self-demonstrating, starting with an example, or otherwise in medias res - or is it a work?
  • So how exactly do namespaces solve bad writing? I mean, those are pretty much problem pages anyway IMO. If we haven't gotten into "Topic is a X" territory by the second paragraph... ugh. It might be possible to modify the {{trope}} template to put a note that something is a trope in the title bar -- would that be useful? I suppose I could do something similar with major work namespaces.
And then, there are common terms with several possible uses.
  • I don't see how renames are going to affect confusion between similar terms, one way or the other. We're either going to need disambiguation pages, notes at the top of the page, and/or some separate namespaces on those anyway.

It's not that namespaces were a bad idea on TVT, given that it runs on PmWiki. But the wiki software we use has a different paradigm, and generally expects to be used more like Wikipedia is. And they have pages on the majority of our works, too.

To be honest, you're not going to convince me to change it back, because I've invested >100 man hours into getting rid of the namespace suffixes. But if there are specific headaches that I'd be able to mitigate with software, then let me know, and I'll try to think of a way to address them.

Vorticity (talk) 06:14, 25 March 2014 (UTC)
Namespaces make for more work when editing.  Of course, some of that can be negated by the Auto Complete gadget -- but the more namespaces, the more cluttered that gets.
... I don't see how renames are going to affect confusion between similar terms, one way or the other.
  • Same as with coding: if errors are immediately obvious, it saves time even if you have to type slightly more. Namespaces make certain sorts of problems quite unlikely to appear in the first place: it's either an existing type in plain text or it's a red link.
    • Example: there's "Death_Star". If an editor had to use "Novels/Death_Star" while writing each of 112 links, would this be too much work to do? I'm reasonably sure they weren't written all at once, either. Now, want to bet how many times you will have to fix links using "[[Death Star]]" alone as a trope because someone guessed from context and didn't bother to check, or just mixed up names?
  • ...and since many, many tropes are named after works and things in works, and both use common words a lot and are easy to mix up, such confusion is going to be all-pervasive.
    • Autocomplete is the root of many and ugly evils. :) "More work" by copypasting 1 short word is not.
  • Lack of namespaces means every link strongly relies on the context. Which makes links on pages that give no context at all almost useless - and this includes everything from "one word obvious trope" junk to big technical pages, especially search, including "What links here": unless a user already remembers what each name is, links have to be checked one by one to find anything in particular. Namespaces instantly filter out lots of misses here, and for works referring to other works they may well narrow hundreds of results down to the 1-2 wanted right away, without checking every single one.
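For what it's worth, the filtering described here really is just a string test once prefixes exist. A minimal sketch in Python - the page titles are invented for illustration, not real pages on either wiki:

```python
# Hypothetical sketch: with namespace prefixes, narrowing a flat
# "What links here" result list needs no per-page lookups at all.

def filter_by_namespace(titles, namespace):
    """Keep only titles carrying the given namespace prefix."""
    prefix = namespace + "/"
    return [t for t in titles if t.startswith(prefix)]

backlinks = [
    "Novels/Death Star",
    "Death Star",            # trope page, no prefix
    "Film/A New Hope",
    "Novels/Shadows of the Empire",
]

# A plain string test filters out the misses instantly.
print(filter_by_namespace(backlinks, "Novels"))
# -> ['Novels/Death Star', 'Novels/Shadows of the Empire']
```

Without prefixes, the same filtering requires consulting each page's metadata (categories or content), which is exactly the extra lookup TBeholder objects to.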
So how exactly do namespaces solve bad writing? 
  • Nothing solves bad writing, ever - or "fanfic" could not be used as an indecent word. ;) But a good site structure (whether via URL or tags) can keep the inevitable problems contained and thus make them matter less. A reader searches for a trope page, and it's a work page - next, please! No need to dig through. Thus fewer users are affected by this problem, and fewer problems affect a specific user. Also, this applies not only to bad writing as such - just as styles clarifying everything from the start aren't necessarily all good. Some ambiguous lines may be acceptable if a reader can determine whether it's most likely the right place before digging in.
It's not that namespaces were a bad idea on TVT, given that they're using PmWiki.  But the wiki we use has a different software paradigm, and generally expects to be used more like Wikipedia does.
  • The thing is, a trope wiki most likely will not, or even cannot, be used the same way. While Wikipedia does provide easy reference to the One True Notion on anything that is Notable (in the One True...), beyond "surface" entries it can afford to make just about every common word a disambiguation page and still camelword-link to them. :) A trope wiki by its very nature relies on internal cross-references all the time.
  • The implementation of namespaces is another matter. Not that the TVT version was too good or too bad - just convenient enough to support structurization needs on a basic level. The "Name_(Media)" format is obviously less convenient, but still much better than nothing.
    • In fact, optimization at the "choose wiki engine" level is a joke. Any wiki or similar engine is inherently suboptimal for troping purposes due to the nature of the content: its own structure is tables of entries - mostly (trope, work) pairs - and forcing it into linear text form is both prone to leaving pages too long for editing, and necessarily makes almost every entry duplicated to appear on both (trope) and (work) sides.
To be honest, you're not going to convince me to change it back, because I've invested >100 man hours into getting rid of the namespace suffixes.
  • "People should think, machine should work".(c) ;)
TBeholder (talk) 07:45, 25 March 2014 (UTC)

Same as with coding: if errors are immediately obvious, it saves time even if you have to type slightly more.

I agree with this sentiment, but writing is not the same thing as coding. (Although there are some very good arguments that they are quite similar.) When you make a mistake in coding, consequences can range from incorrect results to compiler errors to inadvertent features. But the first result is typically very bad. In the context of an incorrect wiki link going to not quite the right page... well, the user will get to read a slightly different page.

I don't think worrying about linking errors is as important a goal as natural writing. Most of our users are non-technical, or should be, so I want to follow the principle of least surprise. One of those surprises is typing the name of a work into the URL bar and having nothing come up. That's what red text means.

...and since many, many tropes are named after works and things in works,
and both use common words a lot, and easy to mix up, such confusion is going to be all-pervasive.

Cool story bro, but I think I need evidence on that one. As if such things weren't confusing already, and wouldn't continue to be confusing under any naming scenario. In an ideal world, everyone would be able to classify things correctly and not complain about it, but I'm simply shooting for controlled chaos. Making most of the people happy most of the time is the goal.

Keep in mind that namespaces aren't going away where there is ambiguity. They're just unnecessary where there is none. And I don't think paying the penalty on every link is justified by the relatively few cases where there is ambiguity. After all, if ambiguity is introduced later, we can always apply a bot to fix the links.
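That bot pass is simple enough to sketch. Assuming it operates on raw wikitext - a real run would go through a framework like Pywikibot and handle many more edge cases - the core rewrite might look like this; the titles are invented examples:

```python
import re

# Hypothetical sketch: if "Death Star" later becomes ambiguous, retarget
# bare [[Death Star]] links to the disambiguated title, preserving the
# displayed text so the article prose doesn't change.

def retarget_links(wikitext, old, new):
    """Rewrite [[old]] and [[old|label]] to point at new, keeping labels."""
    pattern = re.compile(r"\[\[" + re.escape(old) + r"(\|[^\]]*)?\]\]")

    def repl(m):
        # Unpiped links keep the old title as the visible label.
        label = m.group(1) or "|" + old
        return "[[" + new + label + "]]"

    return pattern.sub(repl, wikitext)

text = "See [[Death Star]] and [[Death Star|that novel]]."
print(retarget_links(text, "Death Star", "Death Star (novel)"))
# -> See [[Death Star (novel)|Death Star]] and [[Death Star (novel)|that novel]].
```

The point being: deferring disambiguation until it's actually needed is a mechanical, scriptable fix, not a manual re-edit of every page.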

necessarily makes almost every entry duplicated to appear on both (trope) and (work) sides.

Sadly, I'm not in the mood to write *that* software monstrosity. Besides, I think natural language doesn't lend itself to that kind of double-linkage.

Nothing solves bad writing, ever - or "fanfic" could not be used as an indecent word.

I'm still waiting for someone to tell me how bad my own fanfic is. I've had a few good reviews but I don't believe them.

Autocomplete is the root of many and ugly evils. :) "More work" by copypasting 1 short word is not.
<snip>
"People should think, machine should work".(c) ;)

Heh, nice contradiction. I think I'm going to like you, TBeholder.

Vorticity (talk) 03:42, 26 March 2014 (UTC)

writing is not the same thing as coding. [...] 
When you make a mistake in coding, consequences can range from incorrect results to compiler errors to inadvertent features.  But the first result is typically very bad. In the context of an incorrect wiki link going to not quite the right page... well, the user will get to read a slightly different page.
  • Errors in cross-links require more prevention because rooting them out once they are in place is much harder. Text can be spellchecked, markup can also be syntax-validated, code can also be traced, but bad links to existing targets are detectable only by an actual reader, and even then unreliably. If an editor messed up a link, it isn't obvious from context; it can be noticed immediately only by someone looking at the link and remembering what the linked page is about. Even following a link doesn't automatically alert the reader - at that point one still may mistake it for an obscure but valid reference or joke. And once detected, such links can only be removed, because who knows what the editor actually meant?
I don't think worrying about linking errors is as important a goal as natural writing.  Most of our users are non-technical, or should be, so I want to follow the principle of least surprise.  One of those surprises is typing the name of a work into the URL bar and having nothing come up.  That's what red text means.
  • What do you mean by "natural writing"? It's in [[wiki markup]] syntax to begin with.
  • The readers you expect to input it in the URL bar can just as well be expected to at the very least understand the basic idea of a hierarchy separated by slashes.
  • Getting there by editing the URL bar is still a matter of luck, and if that fails, a reader falls back to search... and search is much more useful when the user can tell the general type of each page at a glance (and/or otherwise easily filter them, which is not implemented).
    • The page title is sensitive to capitalization ("Order of the Stick") either way.
    • There's existing name ambiguity anyway. E.g. look up on imdb.com - how many movies and companies named "Deja Vu" of all things? The article name ambiguity is added on top of this.
  • Of course, the best technical solution that could actually help here would be an "alert" via automatically formatted links to disambiguation pages and redirects, much like it's done with red links.
Cool story bro, but I think I need evidence on that one.  As if such things weren't confusing already, and wouldn't continue to be confusing under any naming scenario.
  • The last ambiguous "work vs. common word (possible article name)" I encountered today: Monk. :D Do you really think it's exactly what everyone typing it into the URL bar would mean?
  • For work vs. article clash - Death Star above. If you want a name even more begging to be mistaken or misused, consider the possibilities for Enter the Dragon.
  • For article vs. work clashes after we exclude common words - yes, it's not that often. But e.g. "Sgt. Rock" is an abbreviation for what was named after it.
    • Generally, Trope Namers are this waiting to happen: if something was a proper name or a catchy phrase in Book 1 of The Grey Mountains Saga, and was worth a mention, by Book 50 it's as likely as not to end up on a cover, no? Serials are fountains of creativity with output pumped back into input. Or, for example, would you bet that in the next handful of Chiropterous Man comics one will not be named "Joker Jury" or "Bruce Wayne Held Hostage", or one of the other phrases trope wikis grabbed?.. ;)
Heh, nice contradiction.

It's not a contradiction, it's an example. "Guess for me what I want to type" obviously counts as throwing away the IBM principle. :]

TBeholder (talk) 06:45, 4 April 2014 (UTC)
I'm still waiting for someone to tell me how bad my own fanfic is.  I've had a few good reviews but I don't believe them.

The narrow niche (and for crossovers it's more an intersection of sets than a union) is one of the reasons the reviews are limited and the number of meaningful ones isn't great, yes. The sad thing is, that's despite fanfics theoretically giving more opportunities to improve, due to direct feedback from a pre-existing interested community. Of course, for original/serial works it's not much better, and as often as not it's more about slapping some epigone of Boris Vallejo (or worse) on the cover. Fanfics look uglier on average mostly because publishers' editors usually perform at least basic checks - obviously, the worst writing doesn't vanish, it just stays between those poor souls, the people who inflict it on them, and the wastebasket (real or sprite). Which is why my point is "nothing solves bad writing": you can only filter out some of it.

TBeholder (talk) 06:45, 4 April 2014 (UTC)

Well, let's start here:

What do you mean under "natural writing"? It's in [[wiki markup]] syntax to begin with.
Lack of namespaces means every link strongly relies on the context.

This is really the point where we differ. A markup language is not a programming language. It's a natural language document with some programming directives embedded. The main point of a markup language, whether it is wikicode, markdown, YAML, or HTML, is to stay out of the way of the content.

So in essence, wiki pages should stay as close to natural language as possible. And if we don't have a trope named "monk", there's no reason I shouldn't be able to talk about that one episode of Monk. English, and indeed all natural languages, are highly context dependent. In that last sentence, the word "natural" isn't an antonym of "anthropogenic", as is its common use. Nor, if it was spoken, would you think the word "are" was the letter "R". Trying to remove context dependence from anything written in a human language is kind of silly and impossible.

In the end though, what this comes down to is values. You think that accuracy is the bigger concern, and I'm choosing speed. And I think that speed in this case correlates to ease of use.

The readers you expect to input it in URL bar
can be just as well expected to at very least
understand the basic idea of hierarchy separated by slashes

Do they? Really? By the way, it's not just the URL bar, but the search suggestions and Special:PrefixIndex. There's a difference between ability to learn and taking the time to learn. I just had this chat on IRC about 3 minutes ago:

kd: yeah I'd recommend cracking out the DBIC it's not like your sysadmins aren't going to be installing it next week

vorticity: Yeah, but then I'd have to learn DBIC *whine*

kd: no, it's fine. you'll want to kill me for about three days and then you'll want to buy me flowers and a mansion with a swimming pool

Once you know something, it's super-easy. But if you don't know it, it's a learning curve. And sure, it might be "better" once you know how to use the tools, but it's just something else to learn. (And yes, I'm going to learn DBIx::Class so don't call me a luddite.)

So then, you look at who the users of a wiki are. Mostly, they're fans. People don't come to the wiki to learn the wiki or get shit done; they come to talk about their favorite works, and do pattern matching. They're immersed in works of creative writing, not of creative coding. In other words, generally non-technical.

The biggest problem of TV Tropes is the learning curve. If you visit Ask The Tropers (the other ATT), you'll see this pattern repeated over and over again:

  1. Person asks slightly n00bish question
  2. Insulting reply by a mini-mod with a tangential answer
  3. Insulting reply by a real mod with a one wiki-word answer
  4. Lock.

Another antipattern there involves people tattling to moderators about some user who's making edits that don't quite conform to policy. While "LURK MOAR" is acceptable "policy" on 4chan, it's really not what we want the wiki to be about.

So back to namespaces. Like everything, we want to be inviting to new users. And I'm going to posit that the principle of least surprise, when applied to new users, means not having to sort everything into a namespace when linking. That may mean more surprise for experienced users, but experienced users will only be surprised for a moment, and will know how to fix the problem instantly.

And that's why I'm supporting speed over accuracy: the penalty for inaccuracy is so very low. You talk about the cost of fixing others' mistakes, and how easy they are to spot. But if fewer mistakes are made in the first place thanks to less markup overhead, then yes, the ones that slip through are harder to spot. And maybe you're right, and the net result is that we end up with more incorrect links. The end result is a wiki with more inaccuracies, which to me is not so bad.

The thing is, every set of data has some bogus elements in it. My technical background is in meteorology, where we receive mountains of data each day, much of it completely bogus. Data validation is its own specialty field, as is data assimilation (into computer models). When forecasters get things wrong, people can die. When our wiki gets things wrong? Meh. *shrug*

And I'm not certain that using medium-based namespaces is best. Here's some anime that I've watched. Can you tell, just by the name, which namespace they'd go in at TV Tropes? Remember, they all have an anime.

  • Popotan
  • Toradora!
  • Durarara!!
  • Revolutionary Girl Utena
  • Dai-Guard

The fourth is especially tricky, because the manga and anime were released at the same time, and the page covers both. If you don't know, it could be "Anime", "LightNovel", "VisualNovel", "Literature", or "Manga". There's no way to be newbie friendly while at the same time requiring newbies to know the history of every little thing they see on the telly. Identifying a trope in a work doesn't require detailed knowledge, nor should it.

Some of your arguments are spurious. We're going to have to do disambiguation pages for "Deja Vu"; it's not like we can avoid it there. With multiple works of the same name, you'll just get "* (2011 film)" instead of "Film/* (2011)". Nor do we lose information when all of the medium information is held in the categories - it's just not part of the page name.

But for the most part, your arguments are absolutely correct. They just don't account for the larger issues involved here. Honestly, I wish I had seen your arguments earlier, as they would have made the decision-making process easier on me. All TVT says about the namespace migration is a line about how smart Eddie was to move the pages, and how he would have been even smarter if they had done it earlier. *gag*

So you're gonna lose this argument (sorry), because the other admins more or less agree with me, and this kind of policy detail is my purview. (GethN7 is about vision and features; Looney Toons is more about heart, history, and editing.) It's not for lack of good arguments, either. But I do promise that we will think about our moderation decisions and explain the logic to you, as well as presenting opposing viewpoints.

i.e. You're a great editor, TBeholder, and I don't want to lose you over this. And thanks for the comments on the state of fanfiction - that was pretty insightful. Looks like I need to slap some ASCII Porn at the top of the document to get readers :o) Vorticity (talk) 21:53, 6 April 2014 (UTC)


because the other admins more or less agree with me

Agree, period. Differentiate only when necessary. I created the anime section on TVT and still sorta curate it here, and I couldn't tell you the right namespaces for those five examples Vorticity gives. And given the idiosyncrasies of the search (regardless of the engine we're using for it), I'd rather go to a disambiguation page than pick the wrong work/trope page by accident or out of confusion.

Looney Toons is more about heart, history, and editing

You know, I never thought of it that way, but yeah, that's so very right. Looney Toons (talk) 17:09, 7 April 2014 (UTC)


This is really the point where we differ.  A markup language is not a programming language. [...] Nor if it was spoken, would you think the word "are" was the letter "R". Trying to remove context dependence from anything written in a human language is kind of silly and impossible. [...] So in essence, wiki pages should stay as close to a natural language as possible. 

Those are two opposite points.

  • "Trying to remove context dependence from anything written in a human language" is indeed silly and impossible. But trying to rely on inherently ambiguous syntax is also silly and impossible - it invites miscommunication, which is why languages and dialects more formal and unambiguous than natural ones appeared long before Ada Lovelace was born. That's what markup does here in the first place: it provides enough unambiguity that links are placed where they are supposed to be and point where they are supposed to point.
  • Trying to "stay as close to a natural language as possible"? It's technically possible to generate hyperlinks by parsing natural-language text, but it would obviously be silly - ending in a disaster of semi-random fragments of text pointing to coincidentally "related" pages via common words, i.e. what a vicious parody of wikiphrenics, or a typical page on everything2.com (not sure those aren't the same thing), looks like.
So in essence, wiki pages should stay as close to a natural language as possible.  And if we don't have a trope named "monk", there's no reason I shouldn't be able to talk about that one episode of Monk.

Beside the point. Your point was that "typing the name" is the sine qua non of functionality. So here's an example. How many people would type "Monk" and expect it to lead there, and how many would expect one of the tropes related to monks?..

Do they?  Really?  By the way, it's not just the URL bar, but the search suggestions and Special:PrefixIndex. 
  • Good, and? Special:PrefixIndex uses namespaces. If its use outside of technical pages is supposed to catch work-quotes-characters groups, that's just a reason to set it up according to the desired functionality - that is, as namespace groups. Oh, wait: without namespaces there's no trivial way to pre-filter them, and this would require either a needless extra database or realtime analysis with lots of disk crunching.
  • And you cannot even easily limit search suggestions to works or tropes if there's no easy way to distinguish them.
There's a difference between ability to learn and taking the time to learn. [...] People don't come to the wiki to learn the wiki or get shit done; they come to talk about their favorite works, 
  • And markup itself is somehow an innate skill? You're applying a double standard.
  • On the practical side - if you want people used to one-button interfaces to have an easy time and not mess up the whole page, a good way to do it would perhaps be adding a widget to the editing panel: "reference to: (droplist of basic page types) (dynamic droplist of actually existing pages within the chosen type) (optional input field)", which would also correctly wrap the result in [[-s (and ''-s for works). Leaving unvalidated, ambiguous, implicit defaults only ensures that malformed links will appear for one more reason, will be harder to fix, and will remain incorrect until someone who understands them and knows how to do it properly notices and takes the time to fix the references.
 there were manga and anime released at the same time, and the page covers both.  If you don't know, it could be "Anime", "LightNovel", "VisualNovel", "Literature", or "Manga".  There's no way to be newbie friendly while at the same time requiring them to know the history of every little thing they see on the telly.
  • This still limits the set of possible ambiguities greatly. A possible reference to "Manga" instead of "VisualNovel" lies close enough that it's trivial to notice and fix for anyone familiar with the work. Misuse of Monk isn't in the same category.

I don't see how knowing the history of every little thing they see on the telly is actually required, either:

  • Redirects.
  • Widgets, pre-validating (see above) or post-validating ("is <this> supposed to point at Work X 1, Work X 2, or Y?"), which could theoretically be generated automatically - e.g. by checking whether the linked page has a disambiguation-list template and then directly importing that list, and/or simply using page sections (Trope_X#Manga) as a clue.
  • More widespread use of /Franchise namespace for multi-adaptation works.
  • Maybe less hair splitting in namespaces than that could help too? ;]
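The post-validating check in the second point above is cheap to prototype. A sketch, assuming disambiguation pages are marked with a template - the template names and the in-memory page store are assumptions for illustration, not the wiki's actual setup:

```python
# Hypothetical sketch: flag links whose target carries a disambiguation
# template, so an editor (or a widget) can be asked which meaning was
# intended. Template names and page contents are invented examples.

DISAMBIG_TEMPLATES = ("{{disambig}}", "{{disambiguation}}")

def is_disambiguation(page_text):
    """True if the wikitext carries a known disambiguation template."""
    lowered = page_text.lower()
    return any(t in lowered for t in DISAMBIG_TEMPLATES)

def flag_ambiguous_links(link_targets, pages):
    """Return the link targets whose pages are disambiguation pages."""
    return [t for t in link_targets
            if is_disambiguation(pages.get(t, ""))]

pages = {
    "Monk": "{{disambig}}\n* [[Monk (TV series)]]\n* [[Warrior Monk]]",
    "Warrior Monk": "A trope about fighting clergy...",
}

print(flag_ambiguous_links(["Monk", "Warrior Monk"], pages))
# -> ['Monk']
```

On a live wiki the same test would query the page (or its categories) through the MediaWiki API rather than a dict, but the decision logic stays this small.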
Some of your arguments are spurious.  We're going to have to do disambiguation pages for "Deja Vu".  It's not like we can avoid it there. Having multiple copies of a work, you'll just get "* (2011 film)" instead of "Film/* (2011)". 
  • In other words: naming actually is not going to be all "so simple even a one-button mouse user can do it!" as you wish it to be. This advantage would fully appear only in an ideal case, under laboratory conditions. But this approach still makes naming inconsistent and provides a reliable built-in mechanism for introducing hard-to-fix bad data and confusion. My argument is spurious, you say? =]
  • Speaking of which, there still are pages like /(Work)/Characters, /(Work)/Quotes, /(Trope)/Quotes and so on, much as there were on TV Tropes. Since they don't go away, namespaces still exist for secondary pages; they just have an implicit default entry for main pages. What we have now is not so much streamlined as an inconsistent and more ambiguous version of the same.
    • IMO the original reverse hierarchy is more convenient - those pages are uniform in functionality (which may also be extended by templates in the future, which is easier with trivial ways to list by both main properties), and it's more convenient for saving. What creates problems out of nothing on TVT is not an excess of data structurization; it's unchecked hair splitting (the fandoms' common cold, aggravated by a recent influx of World of Wikipedia players), workflow issues (too much of the social networking you mentioned, instead of useful instructions), and technical shortcomings (lack of disambiguation, redirects, and/or other support).
i.e. You're a great editor, TBeholder, and I don't want to lose you over this.

It's not quite that frustrating... I usually skip anything that contains a problem that could not be resolved on my own within one minute, however.

Latest in pointless headache fashions: reference(s) to Screwy Squirrel as in the cartoons (at least from Simpleton Voice); Disney animated versions effectively posing as the originals (ewww).
TBeholder (talk) 09:13, 21 April 2014 (EDT)