 

# Casual Computing

  * 1 Preface
  * 2 What's So Great about UNIX?
    * 2.1 Terminology
    * 2.2 Why I like POSIX
    * 2.3 Designed for Power
    * 2.4 Irresistibly Flexible
    * 2.5 Defining POSIX
    * 2.6 Stability
    * 2.7 User Control
  * 3 Environment Variables
    * 3.1 Setting Environment Variables
    * 3.2 Spontaneous Environment Variable
    * 3.3 Use 'Em if You Got 'Em
  * 4 Tabletop Gaming and Anarchist DIY Ideology
    * 4.1 User Mods
    * 4.2 Imagination
    * 4.3 Simplicity and Power Outages
    * 4.4 Analogue Programming
    * 4.5 Barrier to Entry
    * 4.6 Free to Game
  * 5 Python Pip
    * 5.1 Install Pip
    * 5.2 Use Pip
    * 5.3 Pip Locations
  * 6 A Look at Mageia 5
    * 6.1 Née Mandrake
    * 6.2 Eee PC
    * 6.3 Mageia
    * 6.4 Packaging Concerns
    * 6.5 urpm
    * 6.6 No Sharp Edges
    * 6.7 Magickal Mageia
  * 7 Windows Subsystem for Linux
    * 7.1 Yes but What Does it All Mean?
    * 7.2 Open Source Open Share
    * 7.3 What Can Open Source Do For Me?
    * 7.4 Love and Rent
    * 7.5 Shallow Source
  * 8 Role Playing Games
    * 8.1 Object Oriented Gaming
    * 8.2 Maths
    * 8.3 Design
    * 8.4 Planning
    * 8.5 Puzzles and Storytelling
  * 9 Why Does Linux Need Users?
    * 9.1 Good Advocating, Bro
    * 9.2 Advocacy or Solution
    * 9.3 Advocacy Through Action
    * 9.4 Staying Out of Each Other's Kitchens
  * 10 Virtualenv
    * 10.1 Install Pip
    * 10.2 Install virtualenv
    * 10.3 Using Virtualenv
    * 10.4 Fancy Setups
  * 11 Motherly Advice Against Using Apple
    * 11.1 Developer
      * 11.1.1 The SDK
      * 11.1.2 An SDK the Size of an OS
      * 11.1.3 What They Don't Tell You About Cocoa
    * 11.2 User
      * 11.2.1 Consumer Culture
      * 11.2.2 All Your Files Are Belong To Us
      * 11.2.3 Bloat OS
      * 11.2.4 HFS
      * 11.2.5 Price Tag
    * 11.3 Ideologist
    * 11.4 The Alternative
  * 12 Joy of Docbook
    * 12.1 Markup and Markdown
      * 12.1.1 XML and Docbook
      * 12.1.2 The Markdown Agenda
    * 12.2 When Lenience is Strict
    * 12.3 Docbook
  * 13 Gamepad on Linux
    * 13.1 Playstation vs. Xbox
    * 13.2 Xpad and Xboxdrv
    * 13.3 Configuration
    * 13.4 Intercepting Signals
    * 13.5 Steam Controller
  * 14 Points of No Return
    * 14.1 Reality Check
      * 14.1.1 No Authority
      * 14.1.2 Reaching the Point of No Return, And Liking It
  * 15 Advice Against Using Windows
    * 15.1 Developer
      * 15.1.1 Program Files and Architecture
    * 15.2 User
      * 15.2.1 Accomplishments
      * 15.2.2 The Missing OS
      * 15.2.3 Here Let Me Do That For You
      * 15.2.4 Stupid Boot
      * 15.2.5 Bare Metal
  * 16 Expectation and Intention
    * 16.1 Definition of "Right"
    * 16.2 Hardware Knowledge
    * 16.3 Custom Orders
  * 17 Analogue Random Number Generation
    * 17.1 How Random is Random?
    * 17.2 Unexpected Number Generation
    * 17.3 Big Numbers, Divided
    * 17.4 I Spy...
    * 17.5 Modulo
    * 17.6 Shifting Tables
    * 17.7 Pocket Dice Roller
  * 18 Classes and Functions
    * 18.1 No Functions, No Class
    * 18.2 Function
    * 18.3 First Class
    * 18.4 Getting Things Into a Class
    * 18.5 Making Functions in a Class Communicate
      * 18.5.1 Getting Data Back Out
    * 18.6 Now You Know
  * 19 Don't Panic
    * 19.1 Drivers
    * 19.2 Practise
    * 19.3 Stopping the Cliche
  * 20 Making the Simple Complex, and Charging for It
    * 20.1 How to Break Simplicity
    * 20.2 Simple is Pretty
    * 20.3 And Passing the Charges onto You
    * 20.4 Keep Hold
  * 21 OS or Distribution
    * 21.1 Your OS
  * 22 Solo RPG and Tabletop Gaming
    * 22.1 Dark oCCult
      * 22.1.1 Problems
      * 22.1.2 Mods
      * 22.1.3 Verdict
    * 22.2 Combat Heroes
      * 22.2.1 Problems
      * 22.2.2 Verdict
    * 22.3 Lone Wolf Saga
      * 22.3.1 Problems
      * 22.3.2 Mods
      * 22.3.3 Verdict
    * 22.4 Tunnels & Trolls and Dungeon Delvers
      * 22.4.1 Dungeon Delvers
      * 22.4.2 Problems
      * 22.4.3 Mods
      * 22.4.4 Verdict
    * 22.5 Non-Solo Games
      * 22.5.1 Mechanics and Game Design
      * 22.5.2 Cooperatives
      * 22.5.3 Solitaire Mods
  * 23 The I-Told-You-So Unix System Layout
    * 23.1 The Big UNIX I-Told-You-So
    * 23.2 Proof
    * 23.3 Too Many Files?
    * 23.4 Keep it Modular
  * 24 Linux is not an App
    * 24.1 Linux as a Software Appliance
    * 24.2 Embedded Linux
    * 24.3 Linux and Independence
    * 24.4 Linux is Efficient
    * 24.5 Linux is Open
  * 25 Alternative
    * 25.1 Problems of Persistence
    * 25.2 Problems of Scope
    * 25.3 Reclaiming the Term "Alternative"
  * 26 Source RPMs
  * 27 Voluntary Paywall
    * 27.1 Software
    * 27.2 Open Source
    * 27.3 Open for Business
  * 28 Proposal for Distributionism
    * 28.1 Mods
    * 28.2 Proof of Concepts
  * 29 Unix is not OS X
    * 29.1 So What's the Point?
  * 30 Nixstaller
    * 30.1 Download Nixstaller
    * 30.2 genproject
    * 30.3 Place the Payload Files
    * 30.4 Place UI Design Files
      * 30.4.1 Welcome Screen
    * 30.5 Modify config.lua
    * 30.6 Modify run.lua
    * 30.7 Run geninstall.sh
  * 31 Why I Love Linux, Developer Edition
    * 31.1 OS as a Platform
    * 31.2 Everything is an API
    * 31.3 Dev Tools Implicit
    * 31.4 Installing Apps, Integrating with the System
    * 31.5 Open Source Licenses
    * 31.6 Port Everything
    * 31.7 Using the Source
    * 31.8 Free Agency
  * 32 rm 'rm'
    * 32.1 "When I Delete a File, I Want it Deleted"
    * 32.2 "I've Never Had a Problem with It"
    * 32.3 A Sane Replacement
  * 33 Non Geeks and Linux
    * 33.1 West Across the Ocean Sea
    * 33.2 Melodramatic Aside
    * 33.3 12 Bar Blues
    * 33.4 Geek With No Name
    * 33.5 Knowledge is Power, Blah blah blah
  * 34 Ode to Slackware
    * 34.1 The Classic Reason(s)
    * 34.2 Software and /Extras
    * 34.3 Upstream
    * 34.4 Packaging
    * 34.5 Track Record
    * 34.6 Paid
    * 34.7 Unix
  * 35 Colophon
  * 36 Attribution-ShareAlike 4.0 International
    * 36.1 Using Creative Commons Public Licenses
    * 36.2 Creative Commons Attribution-ShareAlike 4.0 International Public License
    * 36.3 Section 1 – Definitions.
    * 36.4 Section 2 – Scope.
    * 36.5 Section 3 – License Conditions.
    * 36.6 Section 4 – Sui Generis Database Rights.
    * 36.7 Section 5 – Disclaimer of Warranties and Limitation of Liability.
    * 36.8 Section 6 – Term and Termination.
    * 36.9 Section 7 – Other Terms and Conditions.
    * 36.10 Section 8 – Interpretation.

# 1 Preface

You like reading about technology? So do I. I follow lots of technical blogs and I check in with tech news websites, and I read magazines, the ones that are online with occasional book releases, as well as those that go straight to print. And of course there are plenty of user manuals and technical specifications out there.

But there's just something about a casual _book_. And there just aren't that many casual books about technology out there. Sometimes you'll find a good one, like Hazel Russman's _The Charm of Linux_ or Mike Gancarz's _Linux and the Unix Philosophy_, or the enduring classic _The Cathedral and the Bazaar_, but aside from those, books about technology are usually appropriately terse; they cover a topic so that you can understand it.

I feel like there's a place, though, for tech books that aren't about the technology, exactly, and are more about tech culture. Instead of declaring that at the end of the book, you'll know how to do Foo, Bar, and Baz, I think there's a place for books that just talk about Foo, and maybe ruminate about why Bar is Bar, and maybe sneak in a nonchalant lesson or two about Baz.

My book _Computing without Compromise_ was my first of that type.

The book you're holding in your hand now is my second.

The point of this book is not to educate, convince, or sell. It's a book about life as an open source geek, and its intended audience is primarily other open source geeks. That means there are some pieces about Linux, another about generic Unix, a few about gaming, a few about programming. It's a loose collection, a mix, and it's yours to read and enjoy as you will.

And it is a book, not a blog and not a magazine. That's something I like a lot; I like the experience of picking up a book, and seeing exactly where it begins and where it ends. I can see the journey laid out before me, and I can decide whether I want to embark on it or not. You don't get that same experience when you start reading a blog that could last two weeks or ten years, or when you search the internet for articles about Linux or Open Source and get back millions and millions of results with no indication of where to begin. I like the journey that a book implies. I like picking up a book, looking at it face to face, and agreeing that we, the book and I, are going to spend time together. It's going to be quiet time, quality time, filled with silent conversations as we agree and disagree over the topics covered. It's going to be eye-opening and stimulating and it's going to cause some interesting mental detours, and maybe open a few doors to a few brand new ideas that the book didn't even mean to inspire.

Admittedly, that may or may not be your _exact_ experience with _this_ particular book (although, one can hope...), but ideally you'll find something enjoyable in this volume. It's not a technical manual, it's not a critical view of electrical engineering or an analysis of firmware implementation. You won't learn many hard facts or a bunch of useful skills. But hopefully you'll enjoy this collection of random thoughts, ideas, notes, and musings. If not, maybe you'll write your own book of thoughts and ideas and musings. If you do, let me know; I'll happily read it. Because I like casual tech books, and I think there ought to be more.

For your approval, I submit this one.

Thanks for reading.

  * Klaatu

# 2 What's So Great about UNIX?

I'm a fan of Unix.

I use Slackware Linux at home and as the foundation of my consulting "business" (the thing that makes me money; I put it in quotes because I don't have employees or anything, it's just how I sell my skills) because out of all the Linuxes, it strives to be the most Unix-like. I'm a fan of that.

But why do I like Unix?

## 2.1 Terminology

First, let's address terminology. I'm saying "unix" here very deliberately. I consider **Linux** to be a subset of a bigger and generic thing called **unix**. Technically that's incorrect; Linux is actually a _clone_ of a product called **UNIX**, written by two AT&T employees, Dennis Ritchie and Ken Thompson (I got to meet the latter at an Ohio Linux Fest, in fact).

Really I should be saying **POSIX**, because that's the intentionally generic, broad specification of anything that aspires to be very unixy without actually having been written, necessarily, by AT&T. The "problem" with the term POSIX is that it's not quite as recognisable, and to many who do recognise it, there's a connotation that we are referring to a set of technical specifications rather than, I guess, the tech specs, the tendencies, and the culture.

## 2.2 Why I like POSIX

I'm a fan of POSIX.

And when I say POSIX, I mean all of what I was just talking about: I mean the technical specifications that define what UNIX is, I mean the history of AT&T UNIX and Berkeley UNIX and that Finnish-born indie clone, Linux, and I mean the computer hacker culture that has been in development since at least the 60s.

The thing is, if we imagined for a moment that all software was free and open source, I would still prefer to use Unix over anything else. You might think I'm only saying that because there _is_ nothing else, but that's not true; Haiku OS (a free implementation of BeOS) is open source and I don't use it. So, removing the question of independence from the equation, why do I like Unix?

Well, at this point, there's a degree of familiarity; I have very intentionally taught myself Unix, so I know its command syntax, I understand its basic concepts, I understand its file system layout, and I have a pretty clear list of troubleshooting steps to follow when I see errors. For that reason, even if, say, Windows were suddenly open sourced, I wouldn't be likely to switch to it.

But that has been very intentional; I had to come around to the POSIX way of thinking in order to achieve that degree of comfortable familiarity.

So a better question might be: why do I choose to like Unix?

## 2.3 Designed for Power

Since POSIX was designed to drive supercomputers, there's really never the sense that we users are "herding cats" when we ask a room full of computers to act in tandem. We are not _wrangling_ 10 or 20 or 100 or 1000 computers into working toward a specific task; that's what they were designed to do all along. And when we use a POSIX box as a personal computer, we're really just borrowing a node from the network and using it on a smaller scale.

And it _is_ a matter of scale. There's no "compatibility" layer sitting between the personal computer OS and the supercomputer OS; it's the same OS, seeing the big picture, all the time. That's not just a matter of esoteric pride; it actually means something: you can learn POSIX at home, you can set up a little two or three person network with whatever you have on hand, utilising native technologies like X11 and NFS, and build yourself a nice little unified computer system. And then you can take those same skills and walk into a big company and actually put them to use on 1000 computers, ideally for a paycheck. That's a big deal, and the very definition of scalability.

## 2.4 Irresistibly Flexible

Everyone always told me that POSIX was flexible, and as a new user I found that to be mostly true; I was elated to find that I could choose pretty much _everything_ about how I interacted with my computer. Not only could I choose what my desktop looked like and how it operated, I could choose whether or not I even wanted a desktop at all. It was amazing.

But after a while, I started getting a little frustrated, because it seemed like all the really powerful and meaningful "flexibility" that everyone talked about was out of reach to me. Sure I could choose whether my desktop had a task-bar and start menu or just a blank screen with a right-click pop-up menu, and I could route audio in and out of applications, or run a modern OS on an "EOL" computer, but I wasn't a programmer, so how could I, a lowly user, take advantage of this "flexible" environment?

I got my answer once I started developing personal preferences with regards to what tools I used on a day to day basis. I didn't even notice, at first, because I'd become so accustomed to how POSIX deferred to my will, but it was sublimely easy to get my computers to use the applications and settings that I wanted them to use. It's so easy that you forget you configured them that way; it wasn't until I sat down at a mate's [Linux] computer at work that I fully realised just how heavily customised my own user environment had become. I'm not exaggerating when I say that I'd created my own personal OS; I mean, not literally, but if the pre-POSIX me were to see my post-POSIX computers, he would be utterly amazed. He would want that kind of power himself. And I'd have to tell him to wait a few years, but that it **would** sneak up on him.

It does sneak up on you; you don't really realise how easy it is to change how something works, on a whim, until you sit down at a non-POSIX system (or a functionally non-POSIX one, if you're pedantic) and try to get the thing to comply with what you want. Sure, there may be some surface-level customisation, but mostly it's an uphill battle, every time you want to change something. _You_ try to get your non-POSIX OS to agree to use a third party application as if it was native, and I don't mean use it _sometimes_, I mean use it _all the time, for everything_. Even if you succeed, I'm taking bets on what'll happen after a routine update. And anything vaguely lower-level than the obvious GUI apps is even worse. There's just no underlying scheme for it, because it's not the intended use.

And here's the thing about me: I have a low level of tolerance for work-arounds. If you don't want me to do something with your technology, then about 7 times out of 10, I would rather just make my own (or find a pre-existing alternative) than work around your bad design.

POSIX design trends agree with me on this.

## 2.5 Defining POSIX

I use BSD from time to time, either on a spare laptop or on a server, depending on what's available to me, because it's a solid and beautifully designed system, and I like to experience various "flavours" of POSIX. (I'd lumped OpenSolaris into the mix at one point; since Sun's demise the open version of Solaris has suffered a few setbacks, but it's coming along.)

One thing about POSIX that becomes apparent only after you have tried more than just one variety is the, well, POSIXness. Sure, you can try Slackware Linux and then give Red Hat a try, and you'll see differences in how some things work and you'll marvel at how there's still a familiar strain of "POSIXness" to them both. But try BSD or Solaris after you've used Linux for a while, and you're almost intimidated initially. The commands are different: some don't exist and others have the same name but react differently than they did on Linux. How can this be a good thing, and why am I saying there's consistency to POSIX?

Well, commands are commands, and the more comfortable you get with them, the less they feel like magical incantations and the more they feel like what they are: applications. They're the same as anything else you run on a computer, even though they live inside a little green-on-black text-only window. So the more comfortable you get with POSIX, the less frustrating commands become when you encounter a new one. It's just a matter of learning how a command works, and then putting it to good use.

That's one side of consistency in POSIX that I'm talking about, but the other is how POSIX "works". Once you understand it, you understand it across the spectrum.

## 2.6 Stability

The stability in POSIX is something I value highly. I don't really understand the excitement a lot of people seem to express when new software versions are released. I theorise that it's a learned response from marketing campaigns; we're "supposed to be" excited, so we are, even though in a week's time we'll predictably be in tears because our computers are borked and we have deadlines to meet.

I prefer the stable computing experience, at least as my default. And POSIX is well-known for that. This extends, I have found, to the culture of POSIX; the ideal state of POSIX admin is one where everything is working as designed, and updates _fix_ problems. You don't find that in all sys admin culture; I can't prove that, because it's just been my experience, but I feel comfortable in saying that you'd be hard pressed to find a serious POSIX admin who didn't hold stability as one of the measures of the success of his network.

## 2.7 User Control

Granularity is a bit of a black hole; you can start talking about how much you love to have the final say in what happens on your computer, but you might go pale when 300 updates queue for your review. That aside, POSIX has a tendency to expose this granularity, and generally you're the one who decides both what you need to review, and what you're willing to hand off to the magic of upstream sources.

This goes for package management as well as for pretty much everything, right up to your daily work itself. You want to blindly accept whatever your distributor gives you? You can do that. You prefer to manage a few important aspects, or customise your desktop, or use a special driver? You can do that. Or would you rather take control of every little decision needing to be made? Well, you can also do that.

It's POSIX. You're the boss.

[EOF]

Made on Free Software.

# 3 Environment Variables

Setting environment variables on Unix isn't exactly something most users do on a daily basis, at least not consciously. Applications they run might set `env` variables for them, or they may set one or two in a startup script like `.bashrc` and then forget about it forever. Advanced users and developers might use them more frequently, because they're surprisingly powerful. You can sort of think of it as changing the basis of reality for the computer's operating environment. Sort of like Virtual Reality goggles for the OS.

For instance, if you set an environment variable one way, then logging into your Linux box drops you into a `bash` shell, and setting an environment variable another way drops you into `tcsh`. Seems simple, but multiply that by 2 or 5 or 20 and you can basically tell your computer to use a completely different set of libraries and even application versions (there goes the myth of not being able to have multiple versions of an application installed).

At my old job, we used environment variables to create little environments on-the-fly; we would script the creation of `env vars` (that's what the cool kids call environment variables) around a specific task or application set, and then use whatever set of variables depending on what we needed to do. This was especially useful since students and members might each want or need to use a different version of some application, depending on what class they were taking or what their project was using.
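
Here's a minimal sketch of that idea; the application name, paths, and version numbers are entirely made up, but the pattern is just a file full of `export` lines that you `source` before starting work:

    # env-myapp.sh -- a hypothetical per-project environment
    # usage: source env-myapp.sh
    export PATH=/opt/myapp-2.1/bin:$PATH
    export LD_LIBRARY_PATH=/opt/myapp-2.1/lib:$LD_LIBRARY_PATH
    export MYAPP_CONFIG=$HOME/.config/myapp-2.1

Because the file is sourced rather than executed, the variables land in your current shell, and everything launched from that shell inherits them.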

To sum up, it's surprisingly powerful.

## 3.1 Setting Environment Variables

You might imagine that such a powerful technology is probably very complex to use. I mean, if something has the power to change the very fabric of a computer's runtime, then it must be something that only advanced users should dare even trying to wield, right?

It's true, but nevertheless I'm going to give you the secret of this power, right now. Brace yourself.

On Unix, to set an environment variable (let's say you want to set FOO to "bar", as an example) for your current BASH or Bourne-style shell:

    $ export FOO='bar'

Or if you prefer a C-style shell like TCSH:

    $ setenv FOO bar

That is all.

Now no matter what you do _from within that shell_, `FOO` will always equal 'bar'. What does that mean to a user? Well, let's say that you are launching an application that can either use a really boring 2D graphics library to draw stuff on screen, or a really fancy library with 3D effects and graphics card acceleration. How might it decide which one to use? Well, maybe it does some checks and figures it out, and then sets `FOO` to whatever it's using. So if the user wants to prompt the application to use one or the other, then setting `FOO` makes that decision for the application.

That's all there is to it. Export a variable; if anything uses that variable, and the variable's contents are sane, then it will be used.

Now, unfortunately, you can't just invent random environment variables and expect applications to use them. It would be nice if you could `set SPEED=reallyFast` and have all your applications work 10x faster than normal, but that's not how it works. You _can_ set anything to anything (including made-up nonsense variables), but for an environment variable to be useful, you need to know what applications _look_ for.

This is not always identified clearly to you. Sometimes it is, in a project's documentation. Other times, it's reserved for developer documentation, or left undocumented because the programming language takes care of it by default. Python, for instance, has a bunch of environment variables that you can tap into, and even manipulate from within Python:

    $ python3
    >>> import sys
    >>> sys.path
    ['','/usr/lib/python3.4','/usr/lib/python3.4/plat-arm-linux-gnueabihf',
    '/usr/lib/python3.4/lib-dynload','/usr/local/lib/python3.4/dist-packages',
    '/usr/lib/python3/dist-packages']
    >>> import os
    >>> sys.path.append(os.path.join(os.path.expanduser("~"), "py"))
    >>> sys.path
    ['','/usr/lib/python3.4','/usr/lib/python3.4/plat-arm-linux-gnueabihf',
    '/usr/lib/python3.4/lib-dynload','/usr/local/lib/python3.4/dist-packages',
    '/usr/lib/python3/dist-packages','/home/klaatu/py']

And so on. Notice how similar it is to this:

    $ export PYTHONPATH="$PYTHONPATH:/home/klaatu/py"
    $ python3
    >>> import sys
    >>> sys.path
    ['','/home/klaatu/py','/home/klaatu','/usr/lib/python3.4',
    '/usr/lib/python3.4/plat-arm-linux-gnueabihf',
    '/usr/lib/python3.4/lib-dynload','/usr/local/lib/python3.4/dist-packages',
    '/usr/lib/python3/dist-packages']

It's essentially the same thing, although Python's interactive interpreter behaves differently depending on how it's launched, so there are minor differences in the exact path; also, in the first instance we appended to the path, and in the second we implicitly pre-pended (because no PYTHONPATH variable existed until Python was launched). The important thing to understand, though, is that if an environment variable exists **and** an application probes for it, then it gets used.

## 3.2 Spontaneous Environment Variable

In BASH, you also have the option of setting and using a variable at the same time that you launch an application or command that you want to use the variable:

    $ TWOLAME=yes ./audacity.SlackBuild

I use this trick frequently with SlackBuilds on Slackware because SlackBuilds are just shell scripts, meaning they can directly utilise environment variables as pre-defined variables. Python and C++ and most anything else can do the same thing, as long as the author bothers to check.

The logic is pretty simple; look at existing settings in the OS (actually, specifically the shell that launched the application), check to see if a value exists for a given variable, use that value if it does exist, and otherwise give it some default value or ask the user for a value by way of a prompt or whatever.
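
In Python, for example, that whole check-and-default dance is a one-liner; `MYAPP_COLOUR` here is a made-up variable name, not anything standard:

    import os

    # use the environment's value if it's set, otherwise fall back to a default
    colour = os.environ.get("MYAPP_COLOUR", "green")
    print("drawing everything in " + colour)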

That's all an environment variable does, really; it provides a named value, which the shell makes available to any application that asks for it.

Here's a really simple example that you can run right now in BASH:

    #!/bin/bash
    SAY=${SAY:-'hello world'}
    echo $SAY

Run it in BASH to see it in action:

    $ bash ./say.sh
    hello world
    $ SAY='i like lettuce' bash ./say.sh
    i like lettuce

or from a C shell (the script itself still runs under a Bourne-style `sh`):

    $ sh ./say.sh
    hello world
    $ setenv SAY whut
    $ sh ./say.sh
    whut

As you can probably surmise, the script checks to see if a variable `SAY` exists yet, and if so, what its value is. The script does this with some BASH shorthand, specifically the `${:-}` bit in the second line. You could do it more generically with some test statements, but if you're OK with using some shorthand, it certainly makes things quicker to write (and most modern unix systems either have BASH on them, or have access to BASH as needed).
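
For the record, here's the same script written long-hand, with a test statement instead of the shorthand:

    #!/bin/bash
    # set a default only when SAY is unset or empty
    if [ -z "$SAY" ]; then
        SAY='hello world'
    fi
    echo "$SAY"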

## 3.3 Use 'Em if You Got 'Em

Any application can check for environment variables and use them, so if you know what variables an application looks for (they usually tell you what they use, and there are some that are so universal that you can pretty much assume applications use them), you can define or change variables and have them affect how an application runs.

In theory, you could define lots of things prior to running an application, even where it looks to get libraries (as with PYTHONPATH). For compiled applications, you can change the `LD_LIBRARY_PATH` variable to pull the rug out from under a binary and tell it to go elsewhere for the libraries it has been told to use:

    $ LD_LIBRARY_PATH=/opt/lib64/ ~/bin/blender
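
If you want an application to get that treatment every time, the usual trick is a small wrapper script somewhere in your `$PATH`; the paths here are hypothetical:

    #!/bin/sh
    # wrapper: point blender at an alternate library tree, then hand over control
    export LD_LIBRARY_PATH=/opt/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
    exec "$HOME/bin/blender" "$@"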

Now, you could take this to an extreme level and manually define where every application looks to get its resources, or you can use it for exceptional cases (that's the most common use for this); it's all up to you and what your specific needs are. The point is, you should know that environment variables exist, why they exist, and how to use them.

And now you do.

[EOF]

Made on Free Software.

# 4 Tabletop Gaming and Anarchist DIY Ideology

I'm the first to admit that sometimes I'm slow to pick up on the obvious, but I also humbly posit that when I finally do get the message, I listen and I listen _good_. Such has been the case with what we all seem to call "tabletop" gaming.

"Tabletop" gaming, of course, is the hip and trendy term for what used to be called "boards games", or card games, or RPG, and basically anything that was not done on a computer. A more accurate term, I guess, is "analogue" gaming.

I grew up with the usual assortment of board games. Nothing wrong with those, necessarily, but they were very much that predictable list of family-friendly, standard-issue board games.

After hearing murmurings about an alternate, members-only, super-secret world of cool board games, some mystical cross between hardcore RPG and traditional coffee table pastimes, I started investigating. I moved slowly; between childhood cautions against the "gateway to the occult" that was D&D, and flat out unfamiliarity with modern gaming, I hardly knew where to begin.

Eventually, I found an entry point in the form of Cards Against Humanity. That's an easy, mad-lib style game, and by all accounts a rip-off of a game called Apples-to-Apples (which I had vaguely heard of but never played). But it's an easy game to fall into, because the focus is on humour, and it doesn't much matter whether or not you're a "gamer"; it only requires that you have a twisted sense of humour and a high threshold for what offends you.

Most importantly, it's a Creative Commons game, so it encourages people to _not_ necessarily buy the game, but print it at home. Or else, buy it and make your own cards. The idea was totally new to me. You mean you can add to the game yourself? Surely no other game ever had that notion.

How little I understood tabletop gaming culture!

## 4.1 User Mods

I'll blame technology for this, but for most of my life, until I discovered open source, it very rarely occurred to me that I could take something and use it in some other way than how its creator had intended. It's a shame I have to admit that, because it really is pretty obvious: when we're kids, we dig around in the kitchen cupboard for plastic containers so we can build makeshift cities for our Lego figures, or we collect all the pillows in the house so we can build forts for ourselves, but at least for me, getting into software-based tech was sort of a tunnel-vision inducing experience. You get some software, you're explicitly forbidden from figuring out how it works, you're told exactly how you may use it, and indeed if you attempt to abuse it, it crashes or ends up being frustratingly inflexible.

I should mention that open source software has none of these drawbacks, although the limitations of software and ROM chips are still there. You can modify and add to your open source system, but you're still bound within the parameters of it being a computer. It can only do so much.

With tabletop gaming, there are no such restrictions.

You can add to a game, modify the rules, share your new ideas with friends, invent an entirely new game. Unlike so much closed source software, you're buying the assets when you buy a game. They are yours to do with as you please. If you don't like the fact that there's a Joker card in your deck, set it aside. If you want the Joker to have special powers, you can grant it special powers. Nothing stands in your way.

That's what entertainment should be about: creativity. Sure, you can sit and enjoy someone else's creativity, and I very often do, but if I _want_ the opportunity to be creative, too, then a good entertainment system ought to allow for that.

Tabletop gaming does, and the best tabletop games encourage and foster it.

## 4.2 Imagination

It surprised me and appealed to me that tabletop gaming, as it turns out, has all the same thrill and, in many cases, immersion that a complex video game has, and indeed sometimes it even does better at these things than a fancy video game.

The obvious analogy is that tabletop gaming is to video games as books are to movies; allowing for some overlapping, one shows you the results of an artist's imagination and the other acts as a catalyst for your own.

At least at the time of this writing, I'm happy to admit that I'm a fairly superficial gamer. On a spectrum with **game theory** on one side and **pure fantasy** on the other, it's the fantasy that appeals most to me. I don't tend to play a game to analyse its competitive mechanics, I play because for a few hours I get to become a rogue or a necromancer or a dictator of a fictional country, and I visit new lands, and there's intrigue and stories and bits of stories that only get hinted at in the artwork of the cards. I enjoy most the games that provide that immersive imaginative experience the way books and radio plays do.

## 4.3 Simplicity and Power Outages

A lot of what I do in real life depends on modern conveniences, like electricity, and the assurance of the continued availability of microchips. I'd find myself with a lot of free time if all of those things suddenly vanished.

Or would I?

In fact, I'm perfectly happy with analogue computing, and have been for a very long time. I used to compose music in notebooks during science class; sometimes I'd be off on a few notes but at worst I'd get the idea and general progression down on paper, so I could correct things when I got home to a synth. I wrote an entire novel by hand during lunch breaks in college. I designed the game mechanics of a card game in my head as I walked to and from work.

I don't really rely on computers as much as I like to think that I do, and tabletop gaming is one of the things that opened my eyes to that.

Computers and software, video games especially, come and go. They get released, people play them, and then the next big thing comes along and eventually the game becomes too old to play on a modern OS, or else it just loses its audience. Sure, some games escape this cycle, or they get re-made by some diligent hacker, or emulated by a clever programmer, but let's face it, sometimes the power goes out.

Sometimes you're sitting around at home in candlelight, wondering what to do with your evening without the internet or without Steam. Or you're out on an "unplugged" adventure somewhere, like, you know, _outdoors_ , without power. Or maybe you're just trying to shed an addiction to the obligatory updates and upgrades that comes with modern video games.

Tabletop games can have a similar life cycle, except for the obsolescence part. Sure, people may lose interest, or the game might go out of print, but the game itself still exists. The scene has been set, the story written, the game mechanics prescribed. As long as someone remembers it, it's still a game, and from there, you never know what might happen or who might randomly discover it, revive it, and enjoy the heck out of it. I'm really not speaking theoretically, here; I did just that with an old game called Dark Cults from 1983: I heard people talking about it, I found an archive of its rules and deck, I re-implemented it at home, and released my work in hopes of others finding it and enjoying it.

And best of all, no electricity or microchips required. Sure, it helps to make cool-looking cards on your computer and to print them out on card stock, but in terms of what we'll all do after the Apocalypse, analogue gaming ranks pretty high on my list.

## 4.4 Analogue Programming

I have long been interested in analogue programming. It's something that has always fascinated me, all the way back to the time as a kid that I attempted to make a computer out of cardboard (it did not work). I want to be able to "program" something without the microprocessor, which is strange coming from a guy who never could understand why his primary school teachers kept insisting that the abacus was actually a calculator.

Even when I started analogue gaming in earnest, I didn't see games as a program. They were games, with clever rules and cool art. I didn't know what made them so fun, and I didn't really give it much thought.

Then one day I was playing Cards Against Humanity with my girlfriend; we'd been looking for that ever-elusive 2-player game so we could spend an evening a week playing tabletop games together. **Cards Against Humanity** requires at least 3 players, but in the rulebook it notes that as a workaround you can deal a hand to "Rando Calrissian", a fake player that receives and submits random cards each turn. Amazingly, the results are just as often hilarious as they are utterly perplexing, but I have been in games with Rando where Rando has very nearly won the game. Such is randomness.

As rule mods go, that works (and in the context of that game, it's very effective), but it's not what you'd call "elegant". The mod boils down to: "if you need another player, play two hands".

When I revived the 80s game Dark Cults as Dark Occult, my world was changed when I discovered the 1985 expansion pack, which included modified rules for a single-player game. As a player, you find the scheme so elegant that the game may as well have been meant for solo play. But from a programming standpoint, it's simply ingenious.

I don't know how much source code you've read, but I've read a lot, and it can best be described as a kind of surf: you wade through some tides, picking out the important spots on the horizon, and then you get to the part where it all comes together, and it's like you're riding on a wave. All that preparation and necessary groundwork finally pays off; it all makes sense, it's beautiful, you understand what the code does. That's how the single player rulebook felt; you see how the card decks are getting divided, you understand that there are percentages involved, you see how challenges are being mitigated and how randomness plays the part of your opponent, but then you deal the cards and actually play it. Then it just downright humbles you.

 **Dark Cults** in single-player mode _programs_ the card deck to provide a reliably entertaining game progression for the player. Between mixing cards carefully into sub-decks and then dictating (per card) what new card may be drawn by the player, the single player version is literally an analogue program consisting of decks of cards and the player. Together, they form an analogue computer destined to play a game, with the human's imagination constructing a story to go along with it (which is the point of the game; it's a story-telling RPG-alike).

Tabletop gaming provides many of the same challenges, the same mechanics, and the same rewards, as computer games. Not just for the player, but for the designer, as well. You can "program" the situation, set the stage for an imaginary environment, and forge the same obstacles and challenges, just like in a computer game.

## 4.5 Barrier to Entry

The concept of a _barrier to entry_ is tricky to agree upon because, strictly speaking, there's always some barrier to entry. I do remember trying to create my own board game as a kid, for instance, and failing miserably. My attempt failed partly because I was unsatisfied with how crude my board game looked when compared to a professionally printed game, but more importantly because I had no concept of game theory. All I knew was that board games involved a spinny thing, maybe some dice, cards, and player pieces; I didn't know how to make those things interesting or useful, and I certainly had no concept of how to manipulatively pit players against one another or against a fake common enemy.

So that's the barrier to entry for tabletop gaming, and it shouldn't be discounted. Not everyone wants to mod a game or invent their own game, or even learn the rules and play a game. And those who do must learn a little something about social engineering and game theory.

Then again, those barriers are all within the person, not the medium.

Computer games have a larger barrier to entry, because to play them, you need a computer, and very often you need a particularly powerful computer. To create them, you have to learn the syntax of a programming language, and potentially some art applications, and whatever else you need for the game to come alive. The end result may be better by some measure, but if your primary goal is to create a game and not necessarily a video game, then none of that should be considered a hard requirement.

It's also easier to dabble in the "programming" of tabletop games. Truth is, most people who have played a tabletop game already have dabbled. It's why there's such a thing as _house rules_ ; people play a game and they discover that some mechanic isn't quite working for them, so they change it to something that feels better, or they flat out get bored reading the rulebook and settle for enough of the rules that make the game go.

You can't quite do that as much with video games. There are mods for the everyday user; any PC gamer with a neon-lit keyboard and "pro gamer" 12-button mouse will tell you how easy it is to download and install some arbitrary user mod from the net and, at least in some small way, change the way they play the game. And more advanced users might be able to hack their way around some exposed parts of a game's Lua scripts (or similar), but mostly video games don't encourage modification, and in the worst cases they strictly forbid it.

## 4.6 Free to Game

I'm not putting on rose-coloured glasses and saying that tabletop gaming is a blissful place where sharing and intellectual freedom reign supreme. Even if it is, I suspect that were tabletop gaming a multi-billion dollar industry the way video games are now, then efforts to enforce copyright and control thought would also increase. However, you can't really alter the fact that tabletop gaming happens dynamically. The program is prescribed, but only the half written in the rulebook. The other half is the players, and they are and will remain free agents.

I'm also not saying that video games are bad, or not as fun. Video games have a wealth of strengths that tabletop games do not have, at least not built-in.

Tabletop gaming is a brave new world that isn't really new, and you should try it, if you haven't yet. Check out sites like BoardGameGeek.com, read a few reviews, and get started. Head on over to Drive Thru Cards and try some print-and-play games. Discover some new games. It might take you to places you never expected.

[EOF]

Made on Free Software.

# 5 Python Pip

It seems like everybody's got a package manager these days, and Python's no exception. In a nutshell: `pip` provides an easy, Python-native, platform independent method to install Python modules.

On one hand, that seems like a collision of domains; after all, your Linux distribution probably has a package manager, so if you also have a Python package manager, then how do you reconcile the one with the other? What happens if you `pip install` pyFoo on Monday and then your package manager stubbornly tries to pull in its own copy of pyFoo on Tuesday as part of some other package?

Well, I run Slackware, so my OS's package manager never collides with anything unless I personally do it myself. But on, for instance, Mageia or CentOS, it could be a problem; in that case, it would be up to me to either defer to `urpm` or `dnf` (since they're more automated than `pip`), or use `pip` but manage my own `sys.path` carefully and just ignore the fact that I actually may have two versions of the same module installed.

You know your own system better than I do, so you decide what you want to do.

However you manage it, `pip` offers an easy way to download and install Python modules, and since it's the community portal for Python, you're likely to get the most updated versions promptly (even before your distribution), and it's all neat and tidy within Python so you never have to get out of the Python mindset.

And better yet, `pip` can run at the user level, so if you haven't got admin privileges for the computer that you use, you can still use `pip` to download and install Python modules to your home directory only.

## 5.1 Install Pip

It should come as no surprise, but the official pip docs provide the install procedure for `pip`. At this point, it's most likely that `pip` is already on your system, so probably a simple update will do:

    $ su -c 'pip install -U pip'

or if you're an unprivileged user:

    $ pip install -U --user pip

If `pip` is not installed at all, then you should download the install script and run it (use the `--user` flag for a local install).

    $ python get-pip.py [--user]
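
At the time of writing, the script is published at bootstrap.pypa.io, so fetching it is one command:

    $ curl -O https://bootstrap.pypa.io/get-pip.py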

## 5.2 Use Pip

I tend to think of `pip` as the `yum` of Python. I say it's like `yum` rather than some other package manager because it's got the same kind of indifference about where a package is located when you ask it to install something. It's happy to download a package from the internet and then install it, or it can take a package that you feed it from your home directory and install that. It doesn't care, so you have some flexibility yourself in how you acquire the things you want to install.

The centralised online repository of Python packages is PyPI, the "Python Package Index". You can go to pypi.python.org, browse what's available, download packages, and install at your leisure.

Similar to packages in a Linux repository, packages from PyPI come in two formats: a source code archive, or a pre-compiled bundle called a "wheel" (in a `.whl` container).

To install something from PyPI, either reference a package by name:

    $ pip install --user foo

Or point it at a package on your system:

    $ pip install --user ~/Downloads/foo-0.6-py2-none-any.whl
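
Once you've got things installed, `pip` will also report on them; these subcommands are part of standard `pip`:

    $ pip list --user     # list the packages installed in your home directory
    $ pip show foo        # version, location, and dependencies of foo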

## 5.3 Pip Locations

Pip can run as a system tool or as a local user-based tool. If you run it with root privileges, it gets installed and installs packages to standard system locations, such as `/usr/bin` or `/usr/local/bin` or however your distribution provides it. If you run it with the `--user` flag, then `pip` gets installed to `$HOME/.local/bin` and your Python modules go to `$HOME/.local/lib`.

There are versions of `pip` for both Python 2.x and 3.x, so if you want to install Python 3 modules, you probably need to run `pip3`; Python 2.x uses `pip`. This is true unless you have installed everything manually and have changed things around; in that case, obviously, you know your system best.
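
If you're ever unsure where your user-level modules are landing, Python itself will tell you (the exact path depends on your Python version):

    $ python3 -m site --user-site
    /home/klaatu/.local/lib/python3.4/site-packages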

That's it. Simple, effective, and easy.

[EOF]

Made on Free Software.

# 6 A Look at Mageia 5

Mandrake Linux was a fairly early distribution (it had been around since the late 1990s), and it was a real heavy-hitter for quite a while. I wasn't using Linux at the time, but from what I've heard, it was one of the first distributions to come up with the idea of a subscription model for updates and extra packages, which I think still has some merit (and apparently, so does everyone else, since many software vendors are now implementing that same idea).

## 6.1 Née Mandrake

Late in 2016, I embarked on an academic exercise over the weekends: installing one Linux release from each year since 1993. I didn't come to discover Linux until 2006 or thereabouts, so this exercise gave me an emulated sense of just how far Linux has come, and how quickly or slowly. Making it easy to configure X, for example, happened far too slowly, I felt. I'm not saying I could have done better, but it wouldn't have hurt to have some more helpful tools to get `XF86Config` sorted, and it amazes me still that nobody ever bothered making `XF86Config` or `xorg.conf` any less confusing; as config files go, the X config is surely the worst I've experienced to this day.

Everything else, though, moved quickly and gracefully, and generally it was smooth sailing since 1998 or so. Even so, when I got round to installing Mandrake 8.0, I was _flabbergasted_. And I don't think I often use that term, so I really really mean it.

Mandrake 8.0 was a thing of beauty. It installed easily, it was friendly and helpful without getting in the way, it made partitioning easy and, most importantly, it let me configure and then test my screen and mouse. And then it was installed, and I found myself looking at a beautiful KDE desktop with helpful shortcuts to documentation and support channels.

All of this is pretty standard now, but from what I'd seen up to that point, it was the most professional and genteel presentation I'd seen. If I'd been shopping for Linux back in 2001 ( _o, would that I could have been so progressive_ ), I feel pretty sure I'd have become a Mandrake user for life (I wouldn't have been savvy enough, then, to understand Slackware; I admit it).

_Mandrake 8.0 from 2001_

At some point before I had even heard of Linux, it merged with another distribution called Conectiva, forming Mandriva Linux.

I don't have a whole lot of history with this bloodline, but in fact it was the first Linux I ever booted. I had gotten the boot DVD from the back of an introduction-to-Linux book. I was eager to try Linux, so I tried booting off of it mostly to prove to myself that such a thing really, truly existed. I'd been reading up on Linux, researching it, trying to wrap my head around this idea that yes, there _really are_ operating systems that come free in the back of books or on the covers of magazines (and online!), and they could be the way one uses a computer. Every day. Forever!

For the record, Mandriva did boot the old second-hand computer I was borrowing at the time. I remember stars and very noble-looking penguins, and that's about it. Well, that and

 _My God, it's full of stars!_

Later, I tried the Metisse windowing system that they were working on integrating into their desktop. It was really neat, but by then I was settled on Slackware (I wasn't any _good_ with it, but I was using it and I wasn't going to quit) so I was only a tourist in Mandriva and didn't stay for long (I think I had to install it on a temporary partition to see the effects, but that was the extent of it).

## 6.2 Eee PC

The real game-changer, not just for me but for the entire computer industry, was the humble Eee PC: arguably the world's first "netbook" and the "low end" power-hitter that spawned an entire sub-genre of laptops. It was small (7 inch screen originally, later expanded to 9 or 10), low-powered, lightweight, ultra-portable, and pre-loaded with something called Xandros Linux.

I bought in around the second or third iteration, mere months after Mandriva had released the first third-party (non-Xandros, I mean) Eee PC-optimised Linux OS; it was Mandriva specially tuned and, most importantly, correctly sized, for the Eee PC. It was (at least, as I remember it) the only distribution that went to the trouble to do that at the time, so I decided it would be the one to use.

I installed it, and loved it. It was polished, it looked great, everything worked, it was RPM-based so it "felt" like Fedora (that's a meaningless statement; it just means that when I went to install packages, they ended in `rpm`, which superficially comforted me), and it had all the packages I could want (I think I used an "extras" repository for the really good stuff, but that was no more outrageous than using RPM Fusion).

I didn't move away from Mandriva on that laptop until years later when it became my _only_ computer, at which point I basically had to switch it over to Slackware. Not Mandriva's fault.

## 6.3 Mageia

Since then, Mandriva has dissolved as a business but has spawned the Mageia and OpenMandriva distributions. I've checked in on Mageia once a year or so since it started, mostly just to see what they've been up to, but I guess it escaped my attention that I might actually run it on something. But recently I got a computer that I wanted to get up and running quickly for every day use; I didn't need anything special, just a Linux desktop with the usual tools, which I'd mostly use to SSH into my servers or my workstation at the other end of the apartment.

Fact is, Mageia is as good now as it ever has been. All this time, I'd kinda been wondering where the nice and stable, long-term supported Linux distribution for "normal" desktop users was in the RPM world. Not that there's anything wrong with, for instance, Fedora, but one of its self-stated goals is to be cutting edge, and Red Hat doesn't tend to offer much in the way of even run-of-the-mill multimedia packaging without source RPM re-builds.

(SUSE is another very notable and important distribution to consider when looking for a stable and long-term desktop, but sometimes you just have to flip a coin and make a choice.)

The thing about running Mageia (or SUSE or Slackware, for that matter) is that some of us get nervous about wandering out too far into the fringe. Some Linux users feel they're already far enough into the unknown by using Linux in the first place, so they ought to stick with a distribution that gets a lot of press and, in theory, has a better chance of support from the rest of the computing world.

Here are some things I've come to notice about Mageia, and about using "obscure" (quotation marks are _very_ deliberate there) Linux distributions.

## 6.4 Packaging Concerns

I admit, I did have some fear that somehow I would come across some application that just wasn't packaged for Mageia. It kept me from using Mageia on computers at the community center where I volunteer (I maintain Mint there, because I was just sure, at the time, that everything and everyone supported Ubuntu).

Believe me, I have no technical reason to fear that, since I'm happy to build my own RPM or do a manual compile and install as needed. But to my mind, if I'm going to "cheat" (by Slacker standards) and use a pre-packaged Linux system on a machine, then I may as well expect everything to be pre-packaged. Otherwise, what's the point?

When it comes to applications, there are two scenarios any Linux user fears:

  1. You look in your software repository and discover that no one has gotten round to packaging up the one application you use every day, all day (or they have, but it's such an old version that it may as well not be there at all).

  2. You go to a random website and find a cool new application that advertises Linux support and find that they only offer Ubuntu-branded `.deb` packages (and a Fedora-branded `.rpm` from four years ago).

These are basically the same issue; they only feel different because they occur in different scenarios. One is a repo let-down (you curse your distribution, Linux, and yourself for not knowing how to package stuff, and so on) and the other is an Internet let-down (you curse the world for being anti-Linux, the software vendor for not bothering to make ONE entry in their Makefile, and so on).

The fact is, I've not used even one Linux distribution where I've not had to do a little package work-around. I don't _like_ that, but since each distribution insists on being its own central source for all one-click installable packages, that's what happens. This does _not_ happen on Slackware because it packages _nothing_; it lets you do that on your own, with `makepkg`. It also would not happen as often if we could all agree on one packaging format, or else agreed that AppImage was actually a pretty brilliant idea after all.

But we don't, so in "pre-packaged" Linux, you're guaranteed to encounter "third party" software that didn't get the memo about how Linux packages stuff, or hasn't been packaged at all, or has a new release that hasn't gotten added to the repo yet. I know this to be true, because I offer several applications myself that are not packaged for any Linux distributions, so if nothing else I can speak as a non-prepackaged software distributor.

Non-packaged software is not only inevitable, but it's also healthy. The whole point of open source is to enable anyone to write software for any OS, so it's just not possible, and it shouldn't be possible, to package everything ever written.

If it was, then open source would be failing.

So don't be afraid of finding software you want to run that has no one-click installer for your distribution. Here are a few sure-fire ways to deal with packages your distribution does not have:

  * Find a package in a related or similar distribution, or on the project's own site, explode it, and re-purpose the resulting bits and pieces.

  * Find a source RPM and re-build it for yourself (a lot easier than you think; see the sketch after this list).

  * Download the source code, compile it, and use `checkinstall` to create a personal RPM.

  * Find a "generic Linux" tarball, if available, and put it in `opt` or `~/bin` or similar and run it from there.

So far, I've encountered one application that was apparently not packaged for Mageia, but I found it in a Fedora repository and was able to adapt it with a little bit of effort. There are a few others I can think of that probably aren't packaged, or else are not packaged at their latest versions, but the same holds true for them: adapt what you find, and do it so you never have to do it again.

Problem solved.

## 6.5 urpm

Of course, a lot of software is already packaged by the Mageia distribution, and ready for you to install in a few clicks from the Mageia Control Center.

I tend to avoid the GUI installers and go straight to a shell.

Mageia, like Mandriva and Mandrake before it, uses the `urpm` family of commands to search for, install, and manage software. Now, my path along the Linux shores was such that I encountered Yellow Dog and Fedora well before I even knew that `rpm` was a command and not just a packaging format, so I cut my packaging teeth on `yum`. As a result, `urpmi` was a novelty to me when I first encountered it on the eeePC. It felt more direct, more closely related to `rpm` than even `yum`.

I quickly came to enjoy it, even though there are a few quirks in design (the fact that `urpmi.addmedia` exists, rather than `urpm --add-media` or something more intuitive, for instance) that other RPM packaging systems (like `yum` or SUSE's `zypper`) have smoothed out.

On the whole, `urpmi` is a nifty little interface to common `rpm` commands, with dependency resolution policies as sane as you could hope for.

Some of its nicer features (a sample session follows this list):

  * `urpmq --fuzzy` performs an rpm fuzzy query.
  * `urpmq --whatprovides` tells you what package provides a given library or executable.
  * `urpmi` installs a package, whether it's a local file or something that it finds in a repository.
  * `urpmi --no-install` downloads packages to `/var/cache/urpmi/rpms` but does not install them.
  * `urpme` **e**rases a package (like `rpm -e`).
  * `urpmf` **f**inds a file contained in a package (for example, `urpmf libfoo.so` finds all occurrences of libfoo.so in RPM packages; like `find /var/log/packages/ -type f | grep foo` in Slackware).
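
Strung together, a typical session might look something like this (the package names are just examples):

    $ urpmq --fuzzy inkscape            # search for anything resembling "inkscape"
    $ urpmi inkscape                    # install it
    $ urpmq --whatprovides libfoo.so    # find out what package ships this library
    $ urpme inkscape                    # erase it again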

Pretty simple, and for a change it's quite nice to have the commands split into several different invocations. It seems odd even to me to praise this, because one of the things I never got my head around in the old apt-get system was the split between `apt-cache` and `apt-get` and so on, much less the options consigned to each (you have to admit that `apt-get remove` is a stupidly confusing concept; how do I _get_ a _remove_?). The new `apt` command fixes most of the confusion for me, but `urpm` uses sane options and intuitive invocations.

## 6.6 No Sharp Edges

One of the things I love about Fedora is its fearless development, and that's also what keeps me away from it when I want a stable system. Not that you can't run Fedora as a stable distro; I've done it before, myself, and it works. The problem for me is the temptation to push the boundary, to keep getting closer to that cutting edge, and then regretting it. So I acknowledge that I'm the problem, not Fedora.

And anyway, you've got to admit that Fedora's repositories are filled to the brim with all the fun applications a busy computerist could ever want.

RHEL (or CentOS, or SL, and so on) is stable but lacks the packages.

Mageia, in short, has both. It has a reasonably long-term release cycle, so you won't find yourself needing to upgrade every 6 months, and yet its repository is reasonably well-stocked.

## 6.7 Magickal Mageia

I regard Mageia as a sort of safer version of Fedora. It doesn't have Red Hat's or SUSE's length of support, but it has a notably better repository than RHEL or even EPEL.

And in terms of staying-power, you have to admit that a distro that traces its development back to 2000 or so is pretty good.

So if you're looking for something between the breakneck pace of a cutting-edge distro and the long-term support of Enterprise-class desktops, Mageia is a nice, solid option.

[EOF]

Made on Free Software.

# 7 Windows Sub System for Linux

At the end of March 2016, Microsoft announced that they had written a compatibility layer, technically known as WSL (Windows SubSystem for Linux) or something like that, so that ELF binaries (the format used by Linux, and so colloquially synonymous with "Linux binaries") could run on top of Windows. I don't usually comment on Microsoft news, because mostly it doesn't affect me, and also it's more subject to change than UNIX tips and general tech trends. In a sense, though, this really is about a tech trend, so I thought I'd jot down a few notes on this development from the viewpoint of someone watching it happen, if for no other reason than for a historical perspective.

First a little background:

In 2015, Microsoft went public and announced that "Microsoft Loves Linux", and more broadly that Microsoft loves open source.

In October 2015, I went to the **All Things Open** conference and was, like everyone in attendance, surprised to see that Microsoft had purchased a booth there. Only, no one was actually surprised; Microsoft had gone out and bought a brand new costume just for this event, handing out stickers carefully designed to be quirky and non-traditional (a cat riding a T-Rex, whilst waving a Windows flag; wow, they really GET my generation!) and staffing the booth with super-casual reps. It was like They really were One Of Us!

And then at the end of March 2016, during their own **Build 2016** conference, they unveiled their Linux Subsystem, built into the NT kernel and usable as a beta in "dev mode" on Windows. What it does, specifically, is translate POSIX syscalls into calls that NT listens and responds to, and provide an abstraction layer so that filesystem locations in Windows are recognised logically by the POSIX processes (a Windows user folder becomes `$HOME`, external devices end up in `/media`, and so on).

None of it is emulated, none of it is virtualised. It's a native NT library (or libraries, probably) running the exact same compiled ELF binary that you would run on Debian.

To sum up: Windows now runs native Linux binaries.

Exciting stuff, right?

## 7.1 Yes but What Does it All Mean?

There are lots of implications around MS condoning, and presumably in some way supporting, ELF binaries.

You might see it as an admission of defeat; Windows binaries can be unofficially run on Linux through the sheer power of independent hacker dedication, and now Linux binaries get corporate sponsorship on Windows. Not only did Linux do it first and without help, but the great and powerful Microsoft has bowed to pressure and finally admitted that there just might be something to this whole open source Linux GNU software thing. Like, something big and serious.

Or you might say that Windows is attempting to gobble up open source culture. After all, Windows tried it the one way for decades and decades, fighting tooth and nail to bury Linux as an amateur, insecure, risky operating system that no one professional should ever dare tamper with. And now they're suddenly embracing it, as if they themselves just discovered this cool new thing that they just gotta bring to the world as a Genuine Microsoft product. That's what Linux is, right? Microsoft Lite? Just a development tool, for geeks who haven't learned Microsoft yet.

In fact, maybe you're a developer who has worked on some of those GNU and BSD tools that Microsoft is now gleefully running on Windows without ever having given your project a dime for your troubles. Not that you needed a dime, but after all the insults, the anti-Linux marketing, the subterfuge, the sabotage, the veiled threats, and the outright lawsuits, one might have thought Microsoft would have the decency to at least publicly apologise, and at best to contribute something tangible (and no, KVM modules so that Azure will run Linux don't count).

Or maybe you see it as the perfect win-win for everyone involved. Microsoft adopts some GNU applications, and Linux gets a stamp of approval from what many see as the very definition of what computing is, plus some pretty major exposure, worldwide, by way of the OS that ships on most computers made.

And maybe you're someone who stands to benefit from this; if you're someone who's stuck on Windows all day or even prefers to use Windows (in which case, heaven knows how you found this site, of all sites), then having access to actual GNU applications on Windows might be a big benefit to you. I don't personally see the convenience of running ELF binaries through a translation library on Windows (it seems like compiled native binaries would be more useful, like with Cygwin).

Yes, there are lots of ways to see this development, lots of angles from which to look at it, and lots of emotions.

So what are we to make of it all?

## 7.2 Open Source Open Share

The GPL, BSD, and similar licenses permit and even encourage people and companies to take code and use it, and, just as importantly, to share it. So sharing software isn't just in the culture of Open Source, it's written in our by-laws. Microsoft ingesting ELF binaries is, really, a good thing. It's exactly what Open Source has been encouraging people to do. I've written several articles myself on how frustrating it is that closed-source companies refuse to integrate free code, especially when that free code would make things so much easier for users and support staff.

In short, we _want_ Microsoft to do this. We want Microsoft users (willing or otherwise) to have the option, at least, to own their data plus the _code_ they used to create their data. We even want it to be an option for them to look at that source code and learn from it or improve it, if that's what they want. And last but not least, we want them to be able to take their data and the tools they used to make it and migrate to any OS they want to migrate to, depending on their needs and interests; no vendor lock in, no data held hostage.

## 7.3 What Can Open Source Do For Me?

I realise that Microsoft, as a corporation, is by no means a newbie, but their attitude to open source, so far, very much has them looking like one. This isn't unusual; most of us take this approach at first. I didn't download VLC for the first time as a kid in order to see what I could do for the project, I downloaded it because it did a better job of playing media than what I had.

It feels, to me, like they had a meeting at Microsoft, looked at the latest Github survey results, saw how many developers Linux had, and asked "How can we get some of those developers on our side?"

So, I imagine, they looked at all the data and tried to find what it was that was attracting all these really good open source devs to Linux. Ultimately, someone must have chimed in and said "well, they seem to like the words Open and Source..." and everyone got really excited and decided that that was the key. Attract open source developers by offering open...source.

You're probably seeing the bizarre irony here. Developers have been developing open source tech on Windows for decades, just as much as they develop open source on any other platform. Why? because it's open access. Anyone can contribute. It isn't rocket science. If you don't let people see your code, people can't work on it, and sometimes they can't even work _with_ it.

So Microsoft started flirting with officially-sanctioned open source.

Great!

Only, let's look at what they are open sourcing. Dev tools, almost invariably. That seems pretty obvious, because if you want to attract developers to your platform, then you would open source the tools they'll need, but it's essentially a nice way of saying "come on in, give us stuff for free!"

Or, in the case of Azure, "come on in, use free stuff on our closed platform!"

Don't get me wrong, the open source code is appreciated. But the expectation is blatantly clear; the source is open for developers, so that developers can program on and for Windows. Windows itself, for instance, is not open source. You can't see that code. The major Microsoft applications aren't open source. You don't get those. You get the tool shed out back, so that you can contribute into Microsoft, but you still can't come inside.

And even the Linux subsystem library is a little awkward, if you stop with the obligatory clichés and deliberate double-takes ("Linux on Windows! I can't wait to see what Richard Stallman has to say about that!"). Why did Microsoft need Canonical developers to work with them (or _for_ them?) to make the GNU-applications-on-Windows project a reality? it's open source, so why not just take the code and integrate it as needed? If they needed expertise, why not hire it? It feels odd that two money-driven companies would suddenly team up to work on bringing free source code to developers when all that's required is for one of those companies to download some code, build, integrate, and release.

But most of all, and most embarrassing, is the fact that the WSL (the Linux subsystem translation libraries) is not, itself, open source.

So, to be clear:

  1. Microsoft loves open source.
  2. Canonical is a vendor of open source.
  3. They work together to bring open source to developers...
  4. By producing closed-source code.

Obviously, the attitude here is still very much _what can you do for me?_

If you're an open source developer, you're welcome at Microsoft. Just as long as you purchase the Windows license, and work within Microsoft's guidelines. Whatever you do, don't think about the closed source surrounding you; just put the stickers on your laptops like the "real hackers" do, and listen to the webinars about how much Microsoft adores open source, and get back to work.

## 7.4 Love and Rent

It's nice that Microsoft loves open source, but love, as they say, does not pay the rent, nor does paying our rent prove love. So what do we want?

Well, for decades Microsoft has held, essentially, a monopoly (not, I am told by the higher courts of "Justice", legally) in the computer market. How do I know this? Because everything (not literally) gets released first and foremost for Windows. Every device you purchase comes bundled with a Windows driver. None of this is "bad", but the fact that it's exclusive very much _is_ bad.

So the fact that Microsoft tells us that they love Open Source means very little, and will continue to mean little of substance, until we start getting more than just love notes. I'm not saying Microsoft has to write the drivers for us, but then again I also didn't ask Microsoft to tell the world how much they adore us Open Source townspeople, either. My point is simply that if they are so very interested in the success of Open Source, why not use their 900-pound-gorilla status to influence vendors to play nice across platforms? shoehorn an industry standard into drivers, one that insists upon open source code, or at least upon open APIs that can be shipped to customers legally, without threat of legal action over firmware blobs and video codecs.

Why not provide Open Source with meaningful contribution?

Well, Microsoft sees no reason to provide meaningful contribution to open source because in the eyes of Microsoft, open source is already doing frighteningly well. That's the thing, I think, that many of us open source users can't comprehend: _they_ are scared of _us_. Who'd have thought?

## 7.5 Shallow Source

The thing is, all of this is fine. If that's how Microsoft wants to treat open source, it's perfectly within their rights to do so. But I think they'll find it's very shallow. I don't mean intellectually shallow, I mean it will be like a shallow fountain. Open source thrives because it is a give-and-take economy. A developer takes code from one place, improves it, releases it, it gets improved or used as a training ground for someone else, and something new springs up from that. It thrives because people come to it earnestly, with honest intentions, and they participate. When they don't, it usually ends badly; they get disgusted with how open source isn't making them enough money, or because people are looking at their code and pointing out errors, or changing it out from under them. And they leave. And that's not what anyone wants. Participation is valued, and it's also rewarding.

But if Microsoft expects to treat open source as nothing but a freely flowing idea-fountain, from which they can draw and draw without ever giving anything meaningful back, I think they'll find, eventually, the experience to be unfulfilling. It's not going to bring them the influx of passionate developers that they want, it's not going to make their closed source code any better, it's not going to make people trust them or love them more. It's going to disappoint and possibly disgust them, and they're going to back away, disenfranchised by their own lack of participation.

And nobody really wants that.

So, hopefully Microsoft will surprise me, and start contributing back in a meaningful and "selfless" way. They own their code, so it's up to them to open it. They also, by extension, own much of the data their users produce; maybe a good start in earning trust would be to use open standards and open formats, and to show an honest interest in user freedom and choice.

Open source isn't a club. Microsoft doesn't have to buy its way in, or trick us into believing that they are Open. You get called "open source" by producing open source; start doing that, and things will have truly changed.

[EOF]

Made on Free Software.

# 8 Role Playing Games

The ("tabletop", or "pen and paper") RPG game is one of those fine traditions that nevertheless has had quite a lot of unfair pre-conceptions and clichés applied to it for pretty much its entire lifespan. As a child, I was forbidden from playing them, because a popular notion at the time was that they were a thin veil over a literal gateway into the occult. Later, it became popular to claim that the only people who played RPG were male social misfits, usually with the subtext that they were latent predators of one sort or another. And then everyone fell back on it being boring, not flashy enough, not high tech; why would anyone play a tabletop game when there are MMORPG's out there, waiting to accept your micro transaction payments plus subscription fees?

So, really, what is tabletop RPG? why is it something that people do and enjoy?

Well, I think it was one of the early D&D rulebooks that described role playing games as "cops and robbers, with rules". I find that to be true in spirit if not in implementation, since children playing cops and robbers are usually running around the backyard chaotically while kids and adults playing an RPG are usually sitting round a table. Still, the basic idea remains that a bunch of people are participating in telling a common story, but instead of arguing over whether that imaginary laser beam hit the imaginary bad guy or not, there are dice and character stats that make the decision for everyone.

While that's accurate, there's just so much more to the RPG experience. Let's look at it from all of its many angles.

## 8.1 Object Oriented Gaming

I've written about how tabletop gaming is a form of analogue programming, and that's one of its most appealing aspects, for me (well, one of many). With card games and board games, it's hard to miss the programming aspect of it, as long as you look for it. But with an RPG, I think there's the danger of confusing what's going on with a simple interactive Choose-Your-Own-Adventure book, sort of like one of those BASIC turtle-logo scripts (or Scratch, depending on your age) you wrote in school, as opposed to, say, a Python or C++ application.

I've always enjoyed the RPG experience, but I never truly understood the complexity of it until I came to understand that it isn't just scripting. All the players agree on a set of pre-defined rules, random number generators in the form of dice are used to introduce entropy into the progression of the game, and a GM acts as a database API to provide access to NPCs, maps, and story (and general management, including seemingly out-of-scope decisions, random number tie-breakers, and so on).

And you can really break that down to some fascinating conceptualisations, if you think about it long enough. For instance, if we just lazily think about an RPG as four people sat around a table stepping through a standard campaign, then it sort of looks like this:

  * Bob, an orc knight with 12 HP and a war hammer
  * Alice, an elven knight with 12 HP and a long sword
  * Ted, a wizard with 10 HP and a book of spells
  * Carol, the DM

And the code they step through would sort of look like this:

    00 PLAYER=(knight,orc)
    05 LOCATION=dungeon
    10 MONSTER=1
    15 FIGHT? GOTO 30
    20 RUN?   GOTO 25
    25 PRINT("You are hit as you run, but survive.")
    27 GOTO 35
    30 PRINT("You attack, but the monster kills you.")
    35 END

Pretty easy to follow. Maybe a little too easy; after all, any "choice" they have has to have been accounted for and scripted in advance.

But if we think of these players in terms of how the _game_ sees them, it would be a little bit more like this, in Python-like pseudo code:

    class Player():
        def __init__(self, CL, RC):
            self.CLASS = CL
            self.RACE = RC
        def defense(self, HP, THAC0, WEAPON, DEX):
            # a character wielding their race's traditional weapon gets a bonus
            if self.RACE == "elf" and WEAPON == "bow":
                RACE_BONUS = 1
            elif self.RACE == "orc" and WEAPON == "hammer":
                RACE_BONUS = 1
            else:
                RACE_BONUS = 0
            self.HP = HP
            self.AC = THAC0 + DEX
            self.DAMAGE = DEX + RACE_BONUS
            return self.DAMAGE

And so on.
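
To make that concrete, here's a minimal sketch of a session using the class above (the numbers are invented, not from any particular rulebook):

    # Bob the orc knight, wielding his trusty war hammer
    bob = Player("knight", "orc")
    damage = bob.defense(12, 10, "hammer", 2)
    print(damage)  # 3: a DEX of 2, plus the orc-with-hammer race bonus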

Now you don't need a script. Any obstacle in a narrative can be dropped in front of the players, and it's a function of mathematics and player choice that decides the outcome.

In our first pseudo-code example, our players had two choices when encountering a monster; they could run or fight. You could add in four other options (play dead, beg for mercy, look to see if the monster has a thorn in its paw making it grumpy, tickle the monster) and it's still a multiple choice test.

With our RPG pseudo code (actually not pseudo; it will run, even though it doesn't do much), you get levels of choices:

 **GM** A monster appears. What do you want to do?

 **Player 1** Wield my war hammer and attack.

 **Player 2** Climb a tree and wield my bow.

 **Player 3** Run.

 **GM** The monster attacks you with [rolls dice] 12 damage.

 **Player 1** My armour plus natural strength is 14, no effect.

 **Player 2 & 3** Out of range.

 **Player 1** I attack with [rolls dice] 15 damage + 1 race bonus.

And so on. (Just wait until the group finds out that the monster has rock-like skin making him basically immune to war hammer blows, and they have to call the wizard back to soften it up.)

Heck, the obstacle doesn't even have to be a monster. It doesn't even have to be an obstacle. The GM could drop a completely innocent person in front of the players, and they could choose to converse or attack or barter or _anything_.

And to be clear, the **GM** doesn't actually have to be a human. It could be a deck of cards or a book; once you start introducing concepts that depend on some random result (like a roll of the die), or math problems (calculating level differentials, armour effects, and so on), there's a degree of non-linear storytelling that emerges from the events themselves. Sure, it may not be very nuanced, and it may become predictable without any real live intervention, but it's dynamic, it's flexible, it's non-linear. The system is self-contained, and will continue to function for as long as you decide to use it.

## 8.2 Maths

Love it or hate it, there's an inherent gamification to a collection of numbers. Let's face it, if you see this:

    (strength + weapon) - dexterity/encumbered + roll = hit

You _want_ to see what happens if you plug in random numbers (as long as you don't personally have to do the figuring; unless, that is, you're into doing calculations for fun).

If the terms in the equation mean something to you, then you want it even more, and the complexity of solving the puzzle increases: how can you maximise your hit? Sure, a lot of it's going to come from strength, but if you choose an especially powerful weapon then you can give your attack quite a boost. Unless, that is, the weapon is too big and encumbers your movement.

So you start looking over your inventory sheet, trying to do the math in your head, except that you never can complete the equation because there's that darned dice roll you have yet to do. But you ignore the unknown factor and come up with a strategy, wielding the weapon you think best, and then you roll. Maybe you're happy with your roll, or maybe you're cursing under your breath; either way, there's the attack and you look to the GM anxiously to find out how much damage you've actually done.
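
If you'd rather let a machine do the figuring, the equation above translates almost directly into Python. This is just a throwaway sketch, with a d20 standing in for the roll you have yet to make; the terms mean whatever your rulebook says they mean (and `encumbered` is assumed to be non-zero):

    import random

    def hit(strength, weapon, dexterity, encumbered):
        # the unknown factor: the dice roll you have yet to do
        roll = random.randint(1, 20)
        return (strength + weapon) - dexterity / encumbered + roll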

## 8.3 Design

Part of the fun of reading books and playing games, at least for me, is finding out what kind of world I've been dropped into. I'm not talking about the story-line, I'm talking about the environment. What's the map like? what's the terrain? what's the landscape look like at sunset? what can I see when I look up at the sky? what are the cultures of the people in this world? what do they think about? what do they fight about? what do they want to achieve in their lives?

In several of my favourite books, the _world_ is as much a character as the people being written about, and in RPG's it's almost always, at least unofficially, a sub-plot that you must learn about your environment as you go. Every RPG is exploratory, in part because you may not know the setting at all, but even once you become familiar with a world, there are no visuals. You _have_ to dig deep into your surroundings, deliberately, because you can't just glance around and absent-mindedly take it all in.

There's a lot of artistry that goes into building a world, so part of the RPG experience is the act of walking into an imaginary art gallery, and taking it all in. Some of the art is the stuff that the game designers put there, provided by the core rulebook and add-on modules, but the other bits and pieces belong to you, the players. You're building this world as you go, using whatever the GM has planned, what the players prompt the GM to invent on the spot, and, of course, what the players themselves force into existence.

## 8.4 Planning

The RPG designers aren't the only ones plotting out maps and filling in statistic charts. For many people, myself included, part of the fun of RPG is the prep work. It's the part where you sit down, or take a stroll outside, and ponder a character build. It's like meeting someone briefly at a café and hearing their life story; you get all the background information, you hear about all of the interests and passions and fears and hopes, and then, in an RPG, you also get to see their future.

And that's fun; all the planning, the rolling for stats to determine how strong they are, or how dexterous, or how charismatic, and just how good their skills are with different weapons or equipment.

Then there's the studying and the research; opening up the core rulebook, reading about all the nuances of how interactions are calculated, how combat is implemented, and all the extra stuff about the history of the world and the cities you'll be visiting, the races there, the cultures.

I say it's "studying" but really it's like a sort of inverted book that you read from the outside in.

And for a GM, the process is even more of the same; the research, the planning, _plus_ the writing of a campaign.

It's all part of the game that isn't in the game, and it can keep you entertained for weeks. I literally read rulebooks for games that I'll probably never play, the same way you might browse a travel brochure for a country you'll likely never visit.

## 8.5 Puzzles and Storytelling

RPG storytelling is more than just non-linear narrative, and it's more than just keeping the GM awake with requests for on-the-fly descriptions of tapestries and the contents of the desk junk drawer. The stories in an RPG evolve out of necessity; the GM or the campaign book might have a specific story prescribed, but if the players don't ever rub the lamp to reveal the vengeful djinn, or if they don't get into the boat to sail to the forgotten island, then the story, at the very least, has to be adapted.

But it's more than even that. There are other stories happening during an RPG: the story of the players themselves. Any RPG player knows that, deep down, the _most important_ tale being told during the game is the story of their own character's development as a person. Those are the stories we humans truly care about. Does the level 1 wizard's apprentice ever get a wand of her own? does the level 5 knight ever master the use of an orc's war hammer? Does the level 2 orc grow up to be the leader of the horde? Does the level 6 witch lower her emotional defenses and finally find true love?

Those are the kinds of stories that no one can tell, but anyone can experience.

The gateway to those experiences is RPG.

[EOF]

Made on Free Software.

# 9 Why Does Linux Need Users?

I have never really been comfortable with the idea of "advocacy", a concept that in the "real" world usually means that a person is kept busy standing on a soapbox proclaiming how important something is, and how other people should do something about it. The idea, I think, is that a critical mass of "advocates" for something will be reached, forcing the hand of those in power, who will, essentially, be bullied into performing some action to fulfill the desire of the advocates.

Advocacy doesn't just exist in politics, it's a tech trend too. I see it crop up in several areas, and it's a tricky thing because on the surface it feels good.

## 9.1 Good Advocating, Bro

Advocacy feels good, because it's (sometimes) a positive statement in favour of something that someone likes (or it's the kind of negative that makes you happy, but I'm going to ignore that style of advocacy). This seems like a good thing, because it's a voice in support of your cause, and that provides a confidence boost and adds to your perceived number.

The problem is, it doesn't add to your number. It only _appears_ to add to your number, but there's no actual support.

(It also presumes that there is a "critical mass" that, once reached, will influence higher powers, meaning that it assumes a higher power that will grant you your wish, which is a whole other problem. Surely the idea is not to petition for something as much as it is to "be the change", to quote a probable inspirational poster.)

I've read several tech journalists and "tech leaders" who go on and on about how "Linux is a fine operating system, nearly ready for prime time" or how "Linux is important" and "we love Linux". It's usually something that's said with the unspoken caveat "but I don't know much about it, and I'm not going to learn". The larger message is that Linux, while powerful, de-centralised, independent, and probably the best solution for technology going forward, is something I'm not going to investigate personally because I'm comfortable with the status quo, however broken down it may be, so I'll keep using and promoting the same stuff I've been using and promoting for decades.

What's the point? It's like a parent smoking a cigarette whilst telling their child not to smoke. The words and actions don't match up, and one is more powerful than the other.

## 9.2 Advocacy or Solution

Advocacy is like a sympathy card; it's a nice thought, but does nothing to solve the problem.

And the problem that we should be solving in the tech world is ineffectiveness in technology. Why are we promoting closed-source tech that has a monetary barrier-to-entry, that keeps secrets from its users, and that most people are being forced to learn and use in order to interact with schools and government?

There are some things that just ought to be provided to people, and those things compound along with society. Civilisation started out offering the basics: safety in numbers, shared resources such as food and water, shelter, and an agreed location for waste disposal. Pretty good reasons to join in the whole "civilisation" trend. But the world got more complex, and expectations increased. Running water, plumbing, a sense of community, arts and entertainment, health care, and other "luxuries" got rolled into the baseline definition of "civilisation".

Today, of course, the baseline includes things like education, job opportunities, and the gateway to it all: technology.

Why, then, are we treating technology the same way water companies are starting to treat water? If you bottle it up and sell it, you can charge people for clean water instead of providing it _gratis_, even though clean water is one of the main selling points for even bothering with the notion of a civilised society in the first place.

This is a bad idea.

Advocating for a better solution is all but empty if you don't actually move toward that solution. "Yes, somebody should do that" pales in comparison to being the one doing it.

## 9.3 Advocacy Through Action

Don't get me wrong: it makes me happy to see a tutorial site using a screenshot of Ubuntu when providing a tutorial, and it warms my heart when sites acknowledge that yes! Linux exists, and that people use it for real life, everyday things!

That's great. I'm grateful for that. I don't want to lose it, and I'm not by any means saying that only people who use Linux are allowed to release software for or show support for or acknowledge or comment on Linux.

But there's support and there's advocacy. When software vendors and websites and articles "support" Linux, what they usually mean is that they do not shut Linux out, and enable Linux users to use their product (whether the product is a tutorial with relevant screenshots, or software with a compatible download, or a widget on the site that requires some browser plugin, or whatever). That kind of support is real, and it's pretty much exactly what I mean when I say you shouldn't advocate but _use_. In those cases, the "use" is reversed; instead of using Linux, they are enabling their product to be used on Linux.

Anything that falls short of that is empty advocacy, which has several problems that pop up in unexpected places.

Sure, the obvious danger is that even a statement of quote-support-unquote becomes, at best, a back-handed compliment. "Linux is great! who knows? maybe someday it'll grow up to be a real operating system that I can actually use!" That's often the subtext when Linux is praised _but not used_, because there's the implication that something needs to happen _in Linux_ for it to be usable. But this ignores, and possibly even spites, the fact that there are millions of users consciously and intentionally using Linux (I'm not counting the billions of people who "use" Linux between the internet, Android, embedded devices, and so on; I am talking about desktop users specifically) on a daily basis to do real work. What was intended as advocacy ends up invalidating millions of users, as if they don't even exist; how can they exist? Linux isn't yet usable (but hey, "it's nice, and it's getting close!").

A nice caveat to avoid this would be something like "Linux is neat and as soon as I stop being a lazy technologist, I'm going to use it!"

More subtle, though, are the good-intentioned advocates, often from within the pool of existing Linux users, but also from outside. People get excited about open source, and in that excitement sometimes things get praised very loudly. But not all praise is informed praise; some of it is just plain old fashioned over-enthusiasm. To make matters worse, the internet is the internet and so the praise gets amplified if enough content-echo sites pick up on it. Why is this a problem?

The problem is that uninformed support is empty support. If you sing the praises of, for instance, GIMP because you heard that it was a really snazzy graphics application, to someone looking for an alternative to Adobe's closed source Illustrator, then you'd be doing a great disservice to the user and the software (because GIMP has only rudimentary path support; the correct answer would be Inkscape). I've read a great number of articles on how great `$FOO` software is, based entirely on the claims `$FOO` itself makes about itself, and a handful of screenshots that certainly make it look like it's powerful. This doesn't do anyone any good, and in fact threatens to make a horrible first-impression on users looking to switch to open source.

If you don't use something, you don't know it; if you don't know something, you can't advocate for it. You can recommend that someone investigate it as a possible solution, but you should not position it as their Problem Solved.

The same holds true when people advocate Linux without using it. Naturally, I personally believe that Linux can be recommended in practically any case, but even I will admit that there are sometimes conditions going along with that recommendation. But if I am not using Linux on a daily basis, I can't intelligently provide those conditional warnings or notes. Worse yet, I might provide incorrect warnings.

Let's say someone is thinking of switching to an open source, Linux-based solution. As a non-user, you give them a list of things to take into consideration based on your general understanding of the current state of Linux, combined with that one time you tried Linux, plus maybe a quick web search. So you suggest Linux, and tell them a few general cautionary notes plus a few added tips, painting a completely incorrect picture of the current state of Linux. Then they try Linux expecting one thing, only to find that what you told them would work doesn't, and what you told them wouldn't work is a one-click install. Are they doing something very right? or very wrong?

Or maybe you're a Linux user, and you've done your research, at least to the point that you have been able. For instance, I don't personally do CAD. I work in VFX, so I do a lot of 3d rendering and I'm in 3d applications a lot, but I have never had the need to learn or even try architectural rendering or design. I'm pretty good with Linux, and I'm pretty good at figuring out applications, given enough time and a million monkeys, so it's entirely within my ability to install some CAD applications and take them for a spin. However, everything I would do with these applications would be entirely without context, and without comparison. Now, admittedly, when most people evaluate software they could do with a lot **less** comparison, but even so I'd be doing architects a major disservice if I pretended like I was an expert architectural-software-on-Linux consultant and assured them that Linux-based CAD was exactly what they needed.

(To reiterate: I have no experience with CAD, so this is a perfect example; to that end, I am neither recommending nor cautioning against CAD on Linux.)

(By contrast, I'm very experienced with tools like GIMP, Inkscape, Scribus, and anything having to do with video. So my recommendations in that area are pretty reliable.)

I've personally seen lists of these "reviews" all over the internet. Sometimes they're even written by occasional users of Linux, but to anyone who uses Linux daily, they almost always read like those book reports you used to do in school when you didn't want to actually read the book, so you just watched the movie instead. That is to say, there's some kernel of truth there, but the emphasis is on the wrong thing, and there are other things that are just completely wrong.

## 9.4 Staying Out of Each Other's Kitchens

This isn't about not wanting someone's sympathy vote, or being sensitive about backhanded compliments, or feeling patronised. It isn't about being possessive, or trying to exclude anyone from trying something new and commenting about it.

My point is that being "supportive" of something vocally and then not following through with action is at best lazy and at worst hypocritical. I'll be the first to admit that the opposite can feel almost as bad: companies and software vendors that actually use and support Linux but make no mention of it, adding to the perceived void of Linux support.

It would be nice to have the complete package in both scenarios. If you use and support Linux, give it first class treatment, the same as everything else. Stop defaulting to one platform and treating others as after-thoughts, because they aren't! take credit for your work, your support, and your dedication to open technology. On the other hand, if you're uninformed about Linux and only want to mention it because you know that it exists and want to acknowledge it, then qualify your statements so that people understand that you are not reporting on research, but on assumptions and second-hand information.

This is nothing more than I'd expect on _any_ topic, tech or otherwise.

The benefit to everyone is that advocates become users instead of observers, and their feedback, both positive and negative, becomes far more valuable. It's easy to critique things, especially things you don't actually use. So get to know the thing first, and then, instead of critiquing it, help make it better. But don't wait around for it to get "good enough" for you to actually use, because as long as you refuse to use it, that day will never arrive.

[EOF]

Made on Free Software.

# 10 Virtualenv

If you do lots of work in Python, you'll eventually hear about `virtualenv`, the Python system to create and maintain virtual environments so that your projects stay superficially separated from one another. Why bother? well, there are several reasons. Two good ones are:

  * it helps you avoid blissfully developing an application on top of a stack and then forgetting to tell users what's in the stack because "the stack" is just _whatever I had on my computer at the time..._

  * you want to use pyfoo-1.0 in one project and pyfoo-2.0 in another

Is it worth the trouble? Frankly, if you're not sitting around thinking "oh my gosh, I need to learn virtualenv!" then you may not need to learn virtualenv. It's a handy tool, but it's definitely an intermediate-to-advanced level tool, so don't sweat it if you have to put off learning it until later.

There are several tutorials on how to set up and use `virtualenv` online, but most of them are confusing or introduce all kinds of frontends and wrappers. I prefer to learn and teach the baseline tools _first_, and then explore layers of abstraction later. So here are the basics of getting `virtualenv` installed, followed by a quickstart on how to use it.

## 10.1 Install Pip

The `pip` system is sort of the `apt` or `ports` of Python. If you are on unix (and in this article, I assume you are, so if you are not, you'll have to translate a little), then Python probably comes pre-installed and is managed by the OS. Pip can probably be installed by your package manager, too, so a command like `apt install python-pip`, or a visit to http://slackbuilds.org if you're on Slackware, or whatever, will get you `pip`.

If you're on a unix system being managed by someone else (as I am, at work), then you may not have access to system-level tools. The good news is that you can install `pip` locally, to your user directory, without admin privileges:

    $ wget https://bootstrap.pypa.io/get-pip.py
    $ python get-pip.py --user

Pretty easy.

## 10.2 Install virtualenv

Now that you have `pip` installed, use it to install `virtualenv`. Once again, you can do this as root or you can do it just for your local home directory:

    $ echo "Systemwide:"
    Systemwide:
    $ sudo pip install virtualenv
    $ echo "Local install:"
    Local install:
    $ pip install --user virtualenv

Now `virtualenv` is installed.

## 10.3 Using Virtualenv

There's nothing mysterious or magickal about `virtualenv`. You can think of it as a kind of superficial `chroot`; if you don't know what that means, then just think of it as _you_ lying to your computer. When you launch Python normally, your computer asks (silently, but it still asks) where it should go to find Python modules; you tell it where modules are located. I can prove this to you:

    $ python
    >>> import sys
    >>> sys.path
    ['', '/usr/lib/python3.4', '/usr/lib/python3.4/plat-x86_64-linux-gnu',
    '/usr/lib/python3.4/lib-dynload',
    '/usr/local/lib/python3.4/dist-packages',
    '/usr/lib/python3/dist-packages']

You see how Python just sorta inherently knows where to look for modules.

Now let's use `virtualenv` to lie to Python about what our system looks like.

First, create a virtual environment:

    $ virtualenv my_fake_env

If `virtualenv` cannot be found, then you either have not installed it or you have installed it locally and have not added the install location to your path. In that case, you can execute the command directly:

    $ ~/.local/bin/virtualenv my_fake_env

But you should probably, for the future, add `$HOME/.local/bin` to your PATH.
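
In `bash`, that's a one-liner (append it to your `~/.bashrc` to make it stick):

    $ export PATH=$PATH:$HOME/.local/bin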

Wait patiently for `pip` to install a reasonable base environment.

Once you get a prompt back, take a look and you'll see that your mini environment exists:

    $ ls | grep env
    my_fake_env
    $ ls -1 my_fake_env
    bin
    lib
    pip-selfcheck.json

Next, activate the environment. There are a few ways to do this, depending on what shell you're working in. For me, at work, I use `tcsh`, but at home I use `bash`. Your choices live in your newly-created environment:

    $ ls -1 my_fake_env/bin/
    activate
    activate.csh
    activate.fish
    (and so on...)

So if I'm running in BASH, it's simply:

    $ echo $SHELL
    /bin/bash
    $ source my_fake_env/bin/activate

If I'm at work on TCSH:

    $ echo $SHELL
    /bin/tcsh
    $ source my_fake_env/bin/activate.csh

And so on. Not rocket science.

Once your environment has been activated, your shell prompt changes:

    [my_fake_env]$

Now you can work on your Python project as usual. Remember how I said `virtualenv` is basically just lying to your computer? Well, let's see the results of that:

    [my_fake_env]$ python
    >>> import sys
    >>> sys.path
    ['', '/home/klaatu/my_fake_env/lib/python3.4',
    '/home/klaatu/my_fake_env/lib/python3.4/plat-x86_64-linux-gnu',
    (and so on...)

You see that all paths within this virtual environment are based in the my_fake_env directory, almost as if the rest of the computer doesn't exist. This means that you can install pyfoo-1.0 in my_fake_env without affecting my_other_env. Pretty neat!

To get out of your environment, just use `deactivate`:

    [my_fake_env]$ deactivate
    $
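
Putting it all together, here's the pyfoo scenario from the top of this article (pyfoo still being an imaginary module):

    $ source my_fake_env/bin/activate
    [my_fake_env]$ pip install pyfoo==1.0
    [my_fake_env]$ deactivate
    $ source my_other_env/bin/activate
    [my_other_env]$ pip install pyfoo==2.0
    [my_other_env]$ deactivate

Two projects, two versions of the same module, and neither one tramples the other.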

## 10.4 Fancy Setups

Are there more elaborate ways to use `virtualenv`? Yes, there are. You can tie it into larger development practices, you can use a wrapper, you can do all kinds of things with it.

That's not what this article is about.

You know how to get `virtualenv` and how to use it. So start using it, get used to it, fall in love with it, and then go out and investigate cool add-on ideas.

Happy hacking.

[EOF]

Made on Free Software.

# 11 Motherly Advice Against Using Apple

There are too often only two types of opinions in this world: the fanatically favourable and the fanatically farcical. The problem with both of those is that they specialise in the extremes. Something is either God's gift to the known universe, or it is the most evil thing in the world and is responsible for war, earthquakes, and traffic accidents.

Well, make no mistake about it: I'm against Apple. This list is a list of reasons _not_ to use Apple. It's not a list pretending to weigh benefits to liabilities. It's just the liabilities.

Then again, that's all it is. It's just a list of the reasons I find Apple to be more trouble than it's worth. It's like when your mother told you to watch out for them big-city women (or men). She didn't tell you why, she didn't fill your head with horror stories, she just sat down and warned you.

And what did you do? You ignored her. You threw out her advice and dated all the wrong people, you had your fun, and then, sure as the day is long, you got your heart broken.

Your mother told you so.

That's what this list is. Not literally, because I'm nobody's mother, but it's the most motherly advice I could summon on why you, young computerist, should avoid Apple.

## 11.1 Developer

 _If you're looking at programming as a hobby or career, these are particularly uninviting aspects to Apple's glossy developer's pearly (or brushed metal? whatever it is these days) gates._

###  11.1.1 The SDK

Computers are programmable. They just are; that's what makes them computers.

You don't need a special Software Developer Kit to program a computer; a computer is programmable at its very core. If a company offers you an SDK, it might be a fancy way of offering you the development libraries you could get anywhere else, or it may be a form of gate-keeping; make sure none of the "normal" users are bothered by all the technical mumbo-jumbo, but then charge extra for the privilege to write code.

In Apple's case, it's mostly been a form of gate-keeping. Historically, their SDK did actually cost extra money. If you wanted to build stuff on the Apple platform, you had to pay Apple. Yes, you had to pay to do work that would benefit at least yourself, possibly other users, and ultimately, in a way, Apple (by way of supporting its "ecosystem").

This started to give way to a free SDK (because Apple started basing their own code on open source, and the very technical developers were cobbling together tools to develop on Apple for $0, outside of Apple's offerings).

But the gatekeeper remains, and for super secret special developer access, you have to buy special membership. This entitles you to special pre-releases, beta code, and so on. It makes sense for Apple; they don't want every comp sci student building early releases of the OS and releasing it online and giving it a bad reputation. However, the flip side of this is that unless you have a few thousand dollars to maintain your club membership each year, then as a developer you are playing catch-up instead of developing on the cutting edge. That means that when the next version of the OS is released, your application could possibly break for all your users until you get a copy of it yourself, can investigate the changes, identify where the breakage is happening, and code around it. Not easy, not fun, and it reflects very poorly on your application's reliability.

 **Verdict:** Not worth it. Instead of chasing the cutting edge, use open source tools so you can _be_ the cutting edge.

### 11.1.2 An SDK the Size of an OS

The Apple SDK, an assortment of open source and closed source development tools, is huge. It always has been, and I guess in a sense, the more the better, right?

Well, maybe not.

After all, if you're a dumb high school kid and all you want to do is compile a cool new 2MB GameBoy emulator so you can impress your friends, then your compiler download shouldn't be 5120 _times_ larger than the thing you're trying to compile (2 MB × 5120 is a 10 GB download). It can be 10 times as large, or even 100 times as large. But not 5000 times as large.

This is too much.

OK, so modularity is nice.

But there's something more to it, beyond just inconvenience. It's the impression that in order to program your environment, you need a whole OS on top of your OS; a whole new set of tools, new libraries and new packages to open and learn. You're not downloading a development kit, you're downloading a way of thinking.

Consider this: you can start writing applications with nothing more than GCC and a good text editor, all in less than 1 GB (yes, that small, even on Mac).

 **Verdict:** Not worth it. Use open source dev tools that remain autonomous and portable and don't require high speed internet to acquire.

### 11.1.3 What They Don't Tell You About Cocoa

(And what they don't tell you they don't tell you about Cocoa.)

People get really excited about the _look_ of the Mac OS interface.

From a programming perspective, though, the Mac interface is a little like one of those plastic bins you get to store Christmas decorations in. You know the box I mean; you put all the stuff into it, then you work it, Tetris-like, into the stack of storage in the hall closet, and it becomes an autonomous object. It is The Christmas Box. You can see into it, so you know you have a Christmas ornament, but you cannot access that ornament without removing the bin from its stack, opening it up, rummaging around, and so on.

The Mac interface, technically named `Cocoa`, is like that because the Apple company builds it in private, and then they put it into the stack, but they lay it out very cleanly; programmers can look at an object in the container, and may even be able to communicate with it in some way, but it's always inside that container, and there's no direct access. That means that if I want to do something with an object in the container, I might have to jump through several programming hoops to do it, even if on some other platform it would take a line of code. That's if you're lucky; in some cases, it might be that an object you need to use is just buried too deep in the container and has not been made available to you.

To be clear, my warning about Cocoa isn't _just_ that it's closed source, it's that it's a contained layer _and_ closed source.

Linux's X server is, by comparison, not a container at all; you can trigger events on your desktop with terminal commands as easily as (or more easily than) with mouse clicks. It's insanely powerful, but you could argue that it has some drawbacks (or not). It being open source is a huge benefit. For something in between those two extremes, look at Android; it implements a similar model to Mac OS, but instead of using closed source, it uses the open source Java toolkit. Suddenly the "contained" graphical layer is a lot less threatening, because you can make calls to anything and everything within it, and it works just as you'd expect.
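
To give you a taste of what that looks like in practice, here's the third-party `xdotool` utility poking at an X desktop from a terminal (the window name is just an example):

    $ xdotool search --name "GIMP" windowactivate    # raise a window by its name
    $ xdotool key ctrl+s                             # then send it a keystroke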

 **Verdict:** Not worth it; introduces a partial API where no API should be necessary. I'll take an abstraction layer, but I want full access to the code for when the abstraction isn't enough.

## 11.2 User

 _If you're looking at Mac OS as "just" a user, these are reasons to avoid it._

### 11.2.1 Consumer Culture

For a long time, Apple marketing positioned Mac OS as the alternative computing platform, as opposed to Windows. They glossed over the price tag, which is sort of fair (more on that later, though), since computers do cost money, but they strongly suggested (or in some cases, blatantly asserted) that Apple users are charming young upstarts with new, bold ideas, bucking oppressive conventions.

The problem is that the "rebelliousness" embodied by Apple is entirely purchased. You buy your way into Apple usage; you can't be a "proper" individualist by digging a computer out of the rubbish and installing a free and open source OS on it. That's not individualism, that's not "revolutionary". No, you have to pay for the Independent and Different Thinker upgrade.

In fact, all accomplishments in the Apple universe are based on how thoroughly you've bought into Apple. The Apple website used to (maybe still does?) profile artists who do things _using Apple products_. Again, fair enough, since that's the point of the website, but it creates a sense that we're all taking part in Apple's world, rather than celebrating what their marketing campaigns claimed: the individual.

Beyond psychology, though, there's this problem Apple has related to how it treats its developer community that actually makes it genuinely difficult for average computerists to program serious applications for Mac. Apple, in its frenzy to keep everything secret, doesn't do a great job of communicating with the cottage industries cowering in its shadow, so when a new release happens, third party applications that you may have come to rely upon sometimes break badly. Sure, that can happen anywhere, but too often have Mac developers closed up shop due to the cost of keeping up with Apple's almost subversive rate of surprise changes.

And it's not just third party developers. Apple does it to its own customers, directly. I grew up with Apple before I switched to Linux, and I can't even count how many flagship Apple products have been suddenly discontinued. And when they discontinue something, they discontinue it. I have spent months of my life rummaging through files, trying to make sure everything has been converted to the new format before upgrading my computer. And I've spent years lamenting some of the files that slipped past me, and as a result could never be opened again. Yes, Apple ate my data, not through one of those "Act of God" quirks, like a failed hard drive, but by betraying my trust in the technology they themselves sold me.

It's as if Apple is challenging the world to just try to use their products, taunting everyone with applications and file formats that, just as soon as you get used to them, will be deprecated and locked in vaults.

To what end? why would Apple do that?

Because they can.

No matter how they abuse their customers, the bulk of them keep buying whatever new Thing comes down the pipe. And if you know that, why wouldn't you keep switching things around? Deprecate the old and force people to buy new; it's the best license enforcement there is.

You can dismiss all of this by shrugging and accepting that Apple Incorporated is, after all, a corporation. To survive, it has to grow its paying customer base. I can't and don't really argue with that. But playing on the self-worth of customers crosses the line between advertising and snake oil.

 **Verdict:** Not worth it. Lying to customers about how a product will free their minds (and other hippy fairy tales) is disingenuous and dishonest, and making computer usage a moving target to ensure purchases is disrespectful at best.

### 11.2.2 All Your Files Are Belong To Us

Apple loves to talk about how it uses open source. Debates about how much they use it and how much they actually give back are pretty easy to find online, but probably the best written article on the subject is _Was Apple the First Major Open Source Company?_ by Steven J. Vaughan-Nichols.

Regardless of how you feel about Apple's attitude toward open source or their refusal to play well with open source contributors, to a user all that really matters is whether or not their data is safe: if a user puts their data on a thumbdrive and takes it to work or school, can the data then be accessed by whatever OS they end up on? If a user gets a new computer and can't afford a Mac, can the stuff they've worked so hard at for the past five years be converted to something usable on the new machine? Should the application that created the data become deprecated, will the user still be able to access the data?

All too often, the answer is negative. Apple does use _some_ open formats, but since so many of its applications are closed source, so are many of its file formats, at least the ones without an immediate, obvious need to inter-operate with other systems or devices. What I'm saying is that Apple uses standards when it would be blatantly stupid not to, and even then, they tend to favour formats that are $0 to use but still maintain a list of restrictions.

I used to tell everyone to use the Universal Disk Format, but of course Apple's done away with support for even that (at least, for now; who can say? maybe they'll restore that feature by the time you read this, or maybe they won't).
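
If you want to try UDF from the Linux side anyway, it's one command; a sketch, assuming the `udftools` package is installed, with `/dev/sdX` standing in for your actual thumbdrive (triple-check that device name, because this erases the drive):

    $ su -c 'mkudffs /dev/sdX'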

I've been paid a lot of money to de-tangle various projects from Apple file formats. It's grueling, manual labour that I only did for the money, and it happened without any help from Apple or their supposed dedication to openness.

 **Verdict:** Make no mistake: Apple doesn't care about your entitlement to access the data you yourself produce with their tools. They just care that you keep using their products.

### 11.2.3 Bloat OS

I can download a complete OS, with not just one but _two_ full-featured desktop environments (think the **Finder**, but generally more robust, and more than one to choose from), a whole set of applications (text editors, photo editor, web browser, email client, image viewer, fonts, media players, and so on), **plus** development tools, some silly games, more file sharing protocol integration than you can shake a stick at, and a heck of a lot more. It fits on a 4.7 GB DVD, and unpacks to about 8 GB on disk.

By contrast, the latest (as of this writing) Mac OS disc image is a 6 GB download, with a bare minimum of 8 GB drive space required. Not too bad, right? Hang on, Mac OS hasn't got even a third of the actual content; it has one text editor, one desktop, no serious photo editor, one media player, and no options for development. Fact is, you're likely to replace most of the default tools with better alternatives anyway, and if you want dev tools then you need a separate 10 GB download.

So basically, twice as much data with a third of the content.

 **Verdict:** Not worth it. Apple will bowl you over with fancy looking props that, when put to the test, flop.

### 11.2.4 HFS

One of my very favourite topics: the HFS+ filesystem. I have a long history with this filesystem, and have written at length about my experiences with it.

Filesystems are difficult to test, because there are several variables involved, so I can't comfortably say that HFS+ is a poor quality filesystem. I will say that it is the filesystem that I trust the least, but I can't back that up with raw data.

What I can say confidently is that at the very foundation of my concern about how Apple regards the sanctity of its users' data is the system that it uses to save that data. And if you can't even get the source code to the filesystem that a company forces you to commit your data to, what does that say about their attitude?

An analogy would be this: you are sent on an important trip but cannot take any of your possessions with you. So you pack them up and take them to someone for safe keeping. They promise you they'll keep it safe, but flatly refuse to tell you where they are going to keep it. But they swear that any time you want to come round and see something, they'll fetch it for you. When you're ready to re-claim everything, they'll give it back to you, but only to you (you may not send an envoy).

Would that make you feel comfortable? what if this person dies or goes missing while you're away? you may never be able to find your stuff again. What if you have to stay abroad longer than anticipated, and want to have your stuff sent to you? What are they _doing_ with your stuff in the meantime, anyway??

That's the HFS filesystem. It stores your data, but you can't read that data from any other computer (unless you go through a Mac, for example, via file sharing). If something happens to your Mac and you don't have another Mac nearby to interpret, you won't be able to rescue your data. If something happens to HFS itself, no one but Apple has its source code, so quite possibly you'll never see that data again.

Luckily, resourceful _open source developers_ (of course) have been reverse engineering HFS for years and can mostly get around its blockades _in spite of Apple's refusal to provide data on how to do that, or even permission to do it at all_ , but the fact that this is necessary is not just disconcerting, it's pretty disgusting. This is how Apple feels about its users and their data. Your family photos, your artwork, your schoolwork, your music: Apple promises to keep it safe, just so long as you always ask Apple nicely for it back.
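
Thanks to that reverse engineering, Linux can usually read HFS+ volumes directly. A sketch (the device name is an assumption; journaled HFS+ is read-only to the Linux driver, hence the `ro`):

    $ su -c 'mount -t hfsplus -o ro /dev/sdb2 /mnt'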

Not only does Apple keep HFS+ from being usable by other platforms, but they refuse to let you choose anything but HFS+. They have provided support for a closed Microsoft filesystem on thumbdrives and the like, but that's all. The system itself still must be HFS+, and that's Apple-only, closed source and exclusive.

 **Verdict:** Not worth it. My data belongs to me, and I should be able to store it as I please, and to transport it seamlessly between devices.

### 11.2.5 Price Tag

I still remember the sinking feeling I got when I had to go buy my very first Mac computer with my own money. I was getting the previous year's model, with an employee discount, from the place where I worked, and I still couldn't actually afford it. Admittedly, I needed a fairly robust one due to being in film school at the time, but it was shockingly expensive. The software that I needed on top of that was actually _more_ expensive than the computer itself.

It was bad enough for me to entertain the idea of buying a Microsoft PC. That doesn't sound that crazy to most people, but for me, especially at the time, it would have been a life-changing event. I didn't do it, but the fact that I was considering it meant that I was truly desperate.

And then there was the time when the mouse broke. I am not kidding when I say that I did not know that I could use a non-Apple mouse, so I went and paid for a new mouse and skipped several meals that week.

This, probably, is why I resent Apple so deeply, the way most people resent Microsoft. It's not the pompous attitude, the slick marketing, the hordes of devoted fans, the aggressive attitude toward its users, the flagrant disregard for user data and application stability, the deliberate inability to inter-operate with other systems; it's the fact that Apple kicks you when you're down. There's just no quarter given; according to Apple, it's Apple or nothing. That's not just by implication, it's the way they design their computers and protocols. You may not choose an alternative, because nothing you're used to on Apple will work on the alternatives. You have to pay, even if it means going into debt.

As it turns out, Apple overcharges for what they sell. It's not really something you can "prove" because obviously when you buy an Apple product, you are indeed paying for a metal case or a specific design choice. But in terms of spec-for-spec, there's no arguing. I can easily build a Linux system that outperforms (cycle-for-cycle) the latest top-of-the-line Mac for a third or half the price (depending on how we equate things).

And you know what? if Apple came out with a budget computer today, I still wouldn't buy it or advise others to do so, even ignoring every other reason to avoid them. Why not? because it's been too long. They've over-charged people, driven people to buy into their illusion on the pretense that they've got no choice because it's such a quality product. This is a business I do not want to do business with.

 **Verdict:** Nowhere near worth it. There's a reason studios and production houses don't buy Apple hardware to run their Linux workstations; it's just overpriced.

## 11.3 Ideologist

Apple markets itself as a "green" (silly term) computer company, meaning they are ecologically responsible. They go to great lengths to make this known on their site, and it usually has that "responsible consumer" boost to it; you feel better about paying all the money you're spending because, while you may be over-paying, at least you're paying _the right_ company.

But there's more to this than meets the eye, I think. Apple is being a little dishonest in their claims, even though they may be reporting their actual production practices accurately. You see, you cannot honestly _imply_ that you're the most ecologically friendly computer company out there just because you don't use mercury in your computers, not if your entire business model hinges on pushing customers to purchase new devices every year or two. Yes, you use no mercury, but you're campaigning for people to throw out two phones and a laptop every three years.

What if Apple slowed production? what if they let their products last for as long as they're made to last?

It can be done. Debian 8 Linux in 2016 supported iBooks from 2004 (possibly earlier, I haven't tried lately). That's 12 years ago; you can have a modern OS on a 12 year old computer. Yes, it's slow, and if you're producing HD images then you probably won't use a 2004 computer that maxes out at 512 MB RAM (or whatever it is). But if you're writing blog posts or just learning programming or networking, you may well use a 2004 computer, quite happily and securely.

This isn't just Apple at fault, here; the computer and tech industry at large is extremely destructive and wasteful. But Apple is the loudest about how different they are from this trend. Apple is the one claiming to be progressive.

So it's disappointing that it turns out to be a lie.

Or is it, really, disappointing?

 **Verdict:** Not worth it. Apple encourages waste and disposal possibly more so than any other major computer manufacturer, and blatantly lies about how "eco friendly" they are.

## 11.4 The Alternative

Apple is a company that has manufacturers in China make it some very nice computer parts, including custom bodies. Some people find its OS attractive.

The truth is, though, that Apple is dangerous to you and your data. Avoid it.

I'm not going to say there's a "similar" experience elsewhere. There's not; Apple is very distinct. And when you use it, it feels like you're part of a special club, whether because you didn't grow up with it, or there aren't that many other Apples in the café, or you've seen them at tech conferences, or they just feel cool. But in the end, the cost is greater than you realise.

Discover liberated computing with Linux. It'll get you using computers like a pro. It'll get you creating.

And most importantly, it'll get you thinking.

[EOF]

Made on Free Software.

# 12 Joy of Docbook

When I discovered the wonderful world of editing text without a word processor, I felt like I'd "come home", as they say, in the spiritual and emotional sense. The earliest writing experiences I had, on a computer at least, had been on a non-GUI text editor, using markup to indicate style and formatting. When I took "computer literacy" courses in school, I used word processors, and then continued to use them later, but I think deep down I never felt that word processing really made much sense, especially since opening a document in a different word processor always seemed to require re-formatting, and word processor upgrades often broke things too. Furthermore, the text always seemed married to the format; word processors seem to assume that if you wrote and designed a document for A4 paper, then you will never ever want to output that document to any other paper size or media. Ever.

Eventually, I discovered `vim`, and later `emacs`, and I have never looked back. I'd successfully liberated (in every sense of the word) my text from inherent formatting, and that alone cut my workload down. Never again would I have to manually re-format content, because I was writing in plain text.

## 12.1 Markup and Markdown

Turns out plain text will only get you so far in life. Well, that's not true; plain text is the foundation upon which the very civilised computing world rests, but saying to use "plain text" is really only telling part of the story. Plain text is a lot more work if there is no consistent structure to it. Early in the lifespan of ebooks, back before "ebook" was even a word, people would make electronic books and post them as plain text online, and very often each person would have their own peculiar style. That's great, until you re-visit your collection years later to convert them into a modern format like EPUB and find that no two books use the same convention to mark chapter breaks, and some broke pages to emulate the pagination of the printed book being transcribed while others had no breaks at all, and so on.

So the computer experts came up with markup languages.

### 12.1.1 XML and Docbook

When I learned about Docbook, I fell in love with it instantly. An HTML-like markup language that produced text that you could run through any number of processors and end up with all your original content in nearly any format you could ever want: HTML, plain text, ODT, EPUB, you name it.

When you get involved with Docbook, you enter into a serious relationship. Docbook is a heavily marked-up language; there's a tag for everything. Paragraph breaks, links, images, not two but _four_ list types, sections, simple sections, chapters, books, metadata, and far too much to even mention. It can be a little overwhelming.

And if you get the markup wrong, Docbook refuses to process. This isn't HTML, this is strict, validated XML. I'm not just talking about mistyping a <para> tag, which will cause Docbook to fail, or even forgetting to close it, which will also cause Docbook to fail; getting the parenting of elements wrong (putting a <simplelist> outside of a <para>, for instance) will break Docbook too. It is utterly unforgiving.

And the toolchain for processing Docbook documents can be pretty heavy. There are several choices, and none are particularly light (although `xmlto` seems pretty reasonable to me). But just because it's "just plain text" does not mean it's a walk in the park, especially if you're authoring a 300 page book with images that need re-sizing and footnotes that need resolving, and styles that need applying, and so on.
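
For a flavour of what the workflow looks like, here's a minimal sketch (the filename is made up, and `xmlto` has to be installed):

    $ cat << 'EOF' > example.xml
    <?xml version="1.0"?>
    <!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
      "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd">
    <article>
      <title>A Tiny Docbook Example</title>
      <para>One paragraph, properly closed, with
      <emphasis>inline markup</emphasis>.</para>
    </article>
    EOF
    $ xmlto html example.xml

Forget a closing tag and `xmlto` refuses outright; for full structural validation against the DTD, you'd run something like `xmllint --valid --noout example.xml` first.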

So why am I still in love with this Docbook thing, especially with the rise of Markdown and other up-and-comers? Well, before we investigate that, let's look at the alternatives.

### 12.1.2 The Markdown Agenda

Docbook has been around for a very long time, and longer if you count its earliest implementations prior to even being known as "docbook". Nobody argues that it isn't complex.

Presumably, in an effort to provide a simpler way to utilise the universality of plain text in conjunction with the power of consistently parse-able formatting, the idea of _markdown_ got popular. And we're all very happy that it did.

There are a few varieties of "markdown" languages. There's _markdown_ itself, with its admirable goal of just bringing consistency to plain text. As long as you keep it basic, markdown proper is nearly intuitive. It's almost difficult to not write it. It mostly looks the way you would write a plain text document anyway.

And yet, it doesn't. Bizarrely, there are things that markdown dictates that just don't come naturally at all, by which I mean no one in any plain text doc ever did it that way. The headings, for instance, are preceded by hashes, links to images are preceded by an exclamation mark, and code blocks are simply indented, more like a blockquote than a code block.

There are other "markdown" languages, though, which improve upon the original. Github, in fact, has its own modified version of markdown which seeks to augment the syntax, but you could argue that it's geared strongly toward Github, which isn't exactly a super-common target so much as it is, well, version control hosting.

I recently learnt from my friend SoundChaser that Pandoc, God's _actual_ gift to the world of text conversion, has its own markdown implementation as well, called Pandoc's Markdown, and I don't exaggerate even one bit when I say that the initial inventors of Markdown should acquiesce immediately by removing the markdown "spec" from the Internet and 301'ing the pages to Pandoc. Everything that Markdown lacks, Pandoc's Markdown accounts for; whether it's the inability to have code blocks after a list without making the code block part of the list, or having internal links, or something more obscure, Pandoc's Markdown provides not only the spec but also an amazing parser.

If you're going to use markdown, use Pandoc's Markdown.
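
A sketch of why (the filenames are made up; `pandoc` has to be installed): one tool covers the whole conversion story from a single source file.

    $ pandoc --from markdown --to docbook -s -o example.xml example.md
    $ pandoc --from markdown --to epub -o example.epub example.md

The same Pandoc's Markdown source goes to Docbook, EPUB, HTML, ODT, and dozens of other formats.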

Similar to Pandoc's Markdown is reStructuredText (rST), used in several Python projects, including the all-in-one doc package, Sphinx, easily the best markdown-based replacement for Docbook available.

reStructuredText uses far more intuitive conventions in some places (underlines to mark chapters and sections, for instance) and simple markup for code blocks, but then deviates into obscurity for URLs, xref- or xinclude-style cross-references, string substitution, index terms, and things like that. When you have to break out a reference guide to know how to write your "natural" markdown text, it stops being markdown; it is now certifiably _markup_.

Complaints aside, markdown systems are really neat, and I do appreciate them and I do use them. They enforce a standard format on otherwise plain text so that conversion is possible, and yet they maintain the readability of plain old plain text.

But that's the funny thing about markdown. The reason markdown is cool is that markdown imposes order on text for better parsing. Like, ya know, Docbook. Only, less explicitly and with a lot less clarity.

## 12.2 When Lenience is Strict

Maybe we're not yet back to Docbook, but there's a definite problem with markdown. It's a lenient format, since it pretty much defaults to outputting as plain text anything not marked down to something specially styled. That means that as long as you type a plain text document, even if it has a bunch of incorrectly-indented code examples and unmarked section headings, your markdown processor will output what you typed as unstyled run-of-the-mill text. It defaults to success, even in the case of failure. And that's darned user-friendly.

But the danger of markdown (rST included) is in its permissiveness. You're free to write in structure, or outside of it. When I write it down that way, it seems like a choice that you consciously make. But what happens more often is that you _believe_ you are writing in proper structure, but have accidentally deviated from it. And guess what happens after you render it to your fancy HTML page or EPUB or ODT document? All of that pretty formatting you thought you were doing is gone or completely different from what you intended. All because you forgot to indent properly, or because you forgot to add a line break, or whatever.

And what about finding and fixing those problems? You more or less have some idea of where the breakage is happening: it's near the part where all your formatting gets messed up. But what exactly is the problem? More often than not, "fixing" a markdown problem is a series of blindly adding whitespace in hopes of appeasing the indentation and line break police. List not behaving the way you want? try adding some spaces. Didn't fix it? remove that line break. No? OK what if we just get rid of the code sample in the middle of the list. Nobody needs a code sample anyway, right? Still no? OK, let's just manually code it in HTML.

And yes, 7 times out of 10, I think, that's been the fix for markdown issues: use HTML instead. You can do this because markdown will happily process non-escaped HTML. Yeah, so markdown's solution for complex formatting is, essentially, to fall back on HTML, a full-blown markup language of exactly the sort markdown was meant to spare us.

You see the irony: in markdown's attempt to eliminate the need for "complex" markup syntax, it confirms the need for markup syntax.

Another problem with markdown, especially when used for large documents, is that it fails silently. Believe me, that's nice at the time of conversion, because, darnit, it just works. That is, until you send the book off to the printer, or you upload it to your site, or push it to your distributor, and then you forget about it until a month later when all the bug reports start coming in. Why are these weird characters at the start of each chapter title? why are there asterisks around this one word? why did all the lines in this one code example run together? and so on. Yes, errors are frustrating, but it's better to know about them _before_ shipping your product.

## 12.3 Docbook

The Docbook method is nowhere near as gentle as markdown.

It requires you to know lots of markup keywords, and, worse still, the _proper order_ in which those keywords are allowed to appear as you write. It argues with you if you deviate from its syntax.

But it tells you where you went wrong. Sure, sometimes it points you to line 432 when the beginning of the error actually starts on line 4, but if you're using a good text editor with folding or syntax error detection, you'll find that. And in a pinch, you can just keep scrolling up, counting "open/close" tags until you find the tag you forgot to close.

The important thing is, Docbook won't let you convert a broken document, meaning that your audience won't be finding your formatting mistakes. You will.

Docbook is also a complete solution. You won't find yourself falling back on HTML when you can't get lists to cooperate, because Docbook specialises in structure. You can style it any way you like; the _structure_ of the document will be perfect no matter what.

I guess what Docbook really delivers, at the end of the day, is _clarity_. I know it sounds deluded to look at XML and say "there, look at how clear it is!" but the truth is, that's exactly what XML provides. Where does a paragraph _really_ end? look at the closing tag. What's part of a code block and what's part of an admonition? look at the tags. Is it one asterisk for bold or two? don't use asterisks, use tags.

The list goes on and on.

But don't get me wrong. I use markdown in real life, and I sincerely appreciate it (I don't mean that in a patronizing way; I am an actual markdown and rST user). This article was written in markdown and converted to HTML with a shell script and Aaron Swartz's markdown converter. There's definitely a place for markdown in this world, because there are absolutely times where Docbook could be seen as overkill (or it might not be; depends on the user and the workflow and personal preference). But if we're recognising limitations in one then we should recognise limitations in the other. What those limitations are depends on your use-case and your tolerance levels for inefficiencies in each.

For me, Docbook is a great solution for authoring content. It's a comfortable system that I enjoy.

Use Docbook if you want your document to be clearly and cleanly structured, and if you want that structure enforced. And if you use something else, when you sit back in bewilderment as your document is rendered incorrectly, then think of me when you say under your breath "shoulda used docbook!"

[EOF]

Made on Free Software.

# 13 Gamepad on Linux

Before Humble Bundle and Valve brought "serious" gaming to Linux, my gaming experience was limited to basically _nothing_ for the bulk of my life, followed by two years of finally buckling and getting a Sony Playstation because my girlfriend at the time wanted to introduce me to video games. Consequently, my knowledge of gamepads was pretty narrow. I knew that the Playstation and the Xbox had different kinds of controllers, but I figured they were just like mice or keyboards: USB input devices that were probably basically universal.

I was incorrect.

I still have limited experience with gamepads, so this document is only my experience in getting three gamepads working on my Linux machine.

## 13.1 Playstation vs. Xbox

If your frame of reference for game controllers is the two big console vendors, then in your world (like mine) there are two "kinds" of controllers: those that ship with or mimic the Playstation controllers, and those that ship with or mimic the Xbox controllers. They're basically the same, except in the ways that they are not, and those are usually the things that annoy you. More on that part in a moment.

The _actual_ difference between the two are the drivers. You see, game controllers are _not_ just mice or keyboards, they're finely-tuned hardware that allows the gamer to interface with their gameworld. They're also pretty vendor-specific, so neither Playstation nor Xbox really "want" to follow any kind of standard but their own. What that means for you is that you are, as is so often the case in closed-source, screwed. You can't just plug a Playstation controller into a computer and have it work, or even an Xbox one; in both cases, you need specific drivers for them to "just work".

On Linux, the easy no-brainer solution is to use an Xbox controller, because between these two choices, the Xbox controller, for whatever reason, has better support. It has its own kernel driver, in fact.

Bottom line: you may love your Playstation controller, but on a Linux PC, just go with an Xbox-like controller. You'll get used to it, I promise. Also, the market for third party controllers is actually really good, so you'll be able to find a controller that uses the Xbox driver but looks and feels more like a Playstation controller if you look hard enough.

In fact, you don't even have to look that hard. The Logitech F310 is a dead ringer for a PS3 controller, but talks like an Xbox controller on the backend. If you prefer the Xbox feel, there are plenty of those out there, too.

## 13.2 Xpad and Xboxdrv

So now that you've settled on an Xbox-compliant gamepad, let's talk drivers.

I was especially impressed when I bought my first generic controller on TradeMe, plugged it into my Slackware machine, and found that it worked without further configuration.

I was again incorrect.

I mean, I was correct: it did work, but it was working via the `xpad` driver. There's nothing "wrong" with the xpad driver, but most big _computer_ games were written for mouse and keyboard first, and Xbox controller second, and the controller support they do have expects genuine Xbox signals. This means that if you're using the proper Xbox driver, you can plug in your controller and the game will recognise the input as Xbox signals, and respond accordingly.

Bottom line: you want to forego the xpad driver and use the xboxdrv Xbox driver.

## 13.3 Configuration

Honestly, most Linux distributions at this point will get your Xbox-compliant gamepad working _for you_. It's a no-brainer. Install the xbox driver, plug in your controller, and start playing your games. If you're running SteamOS (more on that later), you don't even have to install the driver.

I use Slackware, which doesn't ship with `xboxdrv`, so I had to install and configure that more or less manually, but it's really really easy.

  1. First, install the driver from its website.

  2. Along with the driver, you should have downloaded (if not installed) a bunch of example configuration files. Copy the default configuration to your `/etc` directory as `xboxdrv.conf`. If anything listed in my code sample is commented out in the example config file, uncomment it:

        # cat /etc/xboxdrv.conf
    [xboxdrv]
    mimic-xpad = true
    ui-clear = true
    [xboxdrv-daemon]
    pid-file = /var/run/xboxdrv.pid
    [ui-axismap]
    X1 = ABS_X
    Y1 = ABS_Y
    X2 = ABS_RX
    Y2 = ABS_RY
    LT = ABS_BRAKE
    RT = ABS_GAS
    DPAD_X = ABS_HAT0X
    DPAD_Y = ABS_HAT0Y
    [ui-buttonmap]
    start  = BTN_START
    guide  = BTN_MODE
    back   = BTN_SELECT
    A = BTN_A
    B = BTN_B
    X = BTN_X
    Y = BTN_Y
    LB = BTN_TL
    RB = BTN_TR
    TL = BTN_THUMBL
    TR = BTN_THUMBR
    # EOF #

  3. Create a startup file (on Slackware, `/etc/rc.d/rc.xboxdrv`) so that `xboxdrv` runs automatically as a daemon in the background. This makes it so that when you plug your Xbox controller into your computer, it gets recognised and can actually be used by applications. Something simple:

        #!/bin/sh
    # rc.xboxdrv: start/stop the userspace Xbox gamepad driver.
    xboxdrv_start() {
        if [ -x /usr/bin/xboxdrv ]; then
            echo "Starting xboxdrv in daemon mode." >> /tmp/xbox.log
            xboxdrv --daemon --config /etc/xboxdrv.conf \
                --mimic-xpad --detach-kernel-driver --silent &
        fi
    }

    xboxdrv_stop() {
        killall xboxdrv
    }

    xboxdrv_restart() {
        xboxdrv_stop
        sleep 4
        xboxdrv_start
    }

    case "$1" in
        'start')
            xboxdrv_start ;;
        'stop')
            xboxdrv_stop ;;
        'restart')
            xboxdrv_restart ;;
        *) # Default is "start"
            xboxdrv_start ;;
    esac

If you're using systemd, it's just a matter of launching the daemon at boot; systemd handles the rest. Of course, if you're using systemd then you're probably using a distro that has already configured all of this for you. Still, in the interest of completeness, something like this should do:

        [Unit]
    Description=Xbox gamepad driver (xboxdrv)

    [Service]
    Type=idle
    ExecStart=/usr/bin/xboxdrv --daemon --config /etc/xboxdrv.conf --mimic-xpad

    [Install]
    WantedBy=multi-user.target

  4. Make the `rc` startup script executable so that it gets started upon boot.

        $ su -c 'chmod +x /etc/rc.d/rc.xboxdrv'

  5. Remove the `xpad` driver:

        $ su -c 'modprobe -r xpad'

  6. Optionally blacklist it so it never loads:

        $ su -c 'echo "blacklist xpad" > \
    /etc/modprobe.d/BLACKLIST-xpad.conf'

  7. You could manually start the `xboxdrv` driver with `su -c '/etc/rc.d/rc.xboxdrv start'` but you should probably just reboot to make sure that it gets started automatically for you. Note that `xboxdrv` is a userspace driver, so it runs as a process rather than a kernel module. After a reboot, verify that the daemon is running and that `xpad` has stayed out of the way (the first command should print a process, the second should print nothing):

        $ pgrep -a xboxdrv
    $ lsmod | grep xpad

  8. Plug in your controller and start playing games.

## 13.4 Intercepting Signals

The weird thing about being a PC gamer using a console controller is that computer games are generally not written for controllers. Real PC gamers (I'm not a real PC gamer, so I'm excluding myself from this) use the keyboard and mouse. Some games do ship with a controller scheme, and it's getting more and more common as SteamOS permeates the PC market. To find out for sure, just go into the **Options** menu of your game and have a look at what controls are available.

In the event that a game does not provide controller support, you can brute force controller support with some middleware that listens for Xbox control signals and translates them into keyboard and mouse events, which get forwarded on to the game.

There have been many applications of this sort throughout history, so the names and availability of the app might change, but they all basically do the same thing. A really nice current one is called AntiMicro and it's available from github.com/AntiMicro. It's got a nice GUI, presets from other users, and it works quite well. For something simpler, you might also try rejoystick but I use and can vouch for AntiMicro.
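
As it happens, `xboxdrv` itself can do a rough version of this translation, no extra middleware required. A minimal sketch for `/etc/xboxdrv.conf` (the key choices are arbitrary examples I made up, not anything a particular game expects):

    [ui-buttonmap]
    A = KEY_SPACE
    B = KEY_ESC

    [ui-axismap]
    X1 = KEY_A:KEY_D
    Y1 = KEY_W:KEY_S

With a mapping like that, the A button emits a spacebar press and the left stick behaves like the WASD keys. AntiMicro's GUI is friendlier for anything complicated, but for a quick hack, the config file does the job.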

## 13.5 Steam Controller

Xbox-compliant controllers are a breeze, and in terms of PC gaming, it's what I started off with. About two years after Steam came to Linux, Valve released a wireless Steam controller, and I highly recommend having a go. It's a nice controller and is mostly plug-and-play on Linux. I say "mostly" because obviously if you're a hacker and modder the way I am, there's no telling what state your computer's in, but realistically speaking, if you've got a machine (or a partition) that's dedicated to gaming on Steam (presumably you'd be running Debian or SteamOS proper in that case) then the controller definitely is plug-and-play, and has several nice features (cloud syncing control schemes, for instance) in addition to a really slick hardware design.

[EOF]

Made on Free Software.

# 14 Points of No Return

At the time of this writing, I've been a Linux user for a long time. How does one define "a long time"? Good question. Let's ponder.

I've said it before, but it truly does seem like only yesterday that Linux seemed like a big, mysterious, unfathomable puzzle just calling out to me to be solved. And of course, the more you use it and the better you get at it, the more you realise that you never "solve" a really good system, you just get comfortable with it. There should, ideally, never be a point at which you can walk away from a computational system feeling like it's all sorted. There should always be a possibility that has not yet been tested, or a combination not yet tried, or a new corollary to formulate.

That's what Linux is; being completely open source, you can look at every line of code and learn how to use it, but once you do, you can hook into it and build your own system. "Solving" open source is the _start_ of open source, not the end.

## 14.1 Reality Check

One thing I have noticed, along the meandering path that is a life of open source, is the gradual fading of closed source (for lack of a better term) "awareness".

This takes many forms:

### 14.1.1 No Authority

The first is the way one sees closed source software as the measure by which everything else is assessed. It's natural. If you grew up using Microsoft Word, then it's pretty natural that you'll use it as the basis of comparison for any other word processor you use. Lucky for me, I was always into "shareware", so I was always downloading new applications to try out, to the point that it's somewhat a specialty of mine (at least, that's what people tell me) that I can pick up a new application, learn it very well, and then teach it to others. So I don't feel like I've had to get over _much_ , but there were definitely paradigms that I was used to, and which I latently insisted were the "right" ways of doing things.

Gradually, though, that measure slips away. You just kind of forget them. I guess it's a little like transitioning from school as a child into university life. You go to class, you call your professors by their last names, you ask for permission to do everything until you finally notice that nobody requires or even wants you to do that. Maybe a professor just tells you outright: you're an adult now, do what you want.

### 14.1.2 Reaching the Point of No Return, And Liking It

The other form this takes is losing touch with what is even going on in the closed source world. Obviously, whether you can do this depends on your daily encounters with closed source operating systems; I've been lucky enough for years now to not have to work on anything but Linux at home and on the job. As a result, closed source software is something that I know exists but pragmatically I have no awareness of it, much less of what is going on in that industry.

For that reason, I often find myself genuinely surprised when I hear about the latest antics of some corporate software (or social network, or service, or whatever). Not only am I surprised by what new inventive way they've come up with to cheat their customers, but I'm surprised even more by the fact that their customers continue, to this day, to pay to be screwed over. Do these people not already have enough obstacles in their life, that they have to pay a company to invent new ones for them?

This leads me to two conclusions.

First, I see now that me finding and using open source was, basically, inevitable. If I hadn't found it and switched years ago, I'd have found it today. I know myself enough to know that I'm just not the sort of person to sit obediently while a company takes my money only to deliver something worse than what I had to begin with, and which in fact abuses my trust.

Secondly, that I have reached a point of no return. I heard about some "new features" (to put it politely) of some of the latest proprietary operating systems this morning, and tried to imagine myself switching away from Linux. Imagine me losing interest in open source, at least enough to accept a closed foundation upon which I run a sludge of proprietary and open source software; pretty much, the way I used to compute before I knew Linux (or "open source", for what it is) existed. The question, after these imagined presumptions were made, is which OS would I end up using?

The answer actually took me by surprise: I was a man without a country. There is no closed source OS for me to "fall back upon". If I decided to walk away from open source (Linux, BSD, Illumos, and all the others), there's honestly no OS I could settle on and be happy. Mac would have been the obvious choice, since that's what I grew up with, but I've learnt so much about how closed and aggressively non-standard it is (it's non-standard even when using standards, if you can believe it), I just can't see myself wanting to deal with that. I have no experience with Windows but have been told it is easier to program for, but frankly their security track record (I don't mean vulnerabilities as much as I mean proven underhandedness), their ideology (crush the competition by any means necessary, even if it means not playing fair or in the best interests of your customers), and arbitrary roadblocks (pay-to-play features) don't quite match up with my ideas of a good time.

I can imagine other scenarios, like going proprietary-Unix or wandering off into a land of obscure EOL stuff (like RISC OS, OS/2, and whatever else), or opting for a research or hobby OS (Plan 9, Kolibri, Haiku).

The point is that somewhere along this adventure, I've crossed a point of no return. I have settled so comfortably into freedom and flexibility that I cannot imagine giving it up. I'm not a Linux user because I like a $0 OS (I actually pay for my install discs via a subscription to Slackware and RHN, but I absolutely insist upon the guarantee of not being blocked-until-payment), not because I like the community (although I do), the geek cred, the desire to go against the grain, or even to reject capitalism-over-humanitarianism. I'm an open source user, in a way, because open source is the only system that delivers on the promise of intelligent technology, and delivers the liberty and autonomy to do what I want.

Why would I ever subject myself to anything less?

Well, I wouldn't. And I'm quite happy to say so.

[EOF]

Made on Free Software.

# 15 Advice Against Using Windows

It's so popular that it's cliché to hate Microsoft. You either settle for it, or you make money supporting it. But nobody likes it.

At least, that's how it seems.

Myself, I don't hate Microsoft any more or less than I hate any other mega corporation. I find it unfair that they are the de facto computer system for certainly the bulk of personal computers in the world, and that they are used by governments but impose restrictions on the people paying for the software through taxes. It's all scummy, murky water that obviously is not right, and yet Microsoft is still around, so I guess everyone is fine with it. Like I said: you settle for it.

I'm one of those unique few who grew up with no exposure to Windows. My family never used Windows, I don't recall it being something I used in school, I managed to avoid it at work (my early jobs were in computer retail, but I was always "the Mac guy", and then I worked in film, and then in Linux). Windows has simply never been something I've had to deal with, for better or for worse, for my entire life.

So aside from my disdain for its business practises (which, again, I don't really see as being any different from any other major US company), I actually don't have _that much_ to say about Microsoft Windows as an OS, because I don't have the deep personal wounds I have from Apple.

With that being said, I do have issues with Windows beyond Microsoft as a company. Some of them are the standard complaints about sanctimonious closed source, but there's a gem or two that I have myself discovered in my brief interactions with it whilst trying to make sure an application is cross platform.

To be clear: this is a list of reasons _not_ to use Windows. It's not a list pretending to weigh benefits against liabilities. It's just the liabilities. I realise that for every complaint about any OS or technology, there's always a hack. You can work around stuff. We all do it every day. The problem is, some things just aren't worth hacking out a way to circumvent a block in the road.

Sometimes it's just better, for so many reasons, to just not take that road in the first place.

Here are some reasons:

## 15.1 Developer

 _If you're developing on or for Windows, the good news is that a lot of stuff supports Windows in a very native-feeling way. The bad news is that Windows suffers from several questionable design decisions._

### 15.1.1 Program Files and Architecture

This is a little bit technical, but it says a lot, so if you're keen, read on. Otherwise, maybe skip to the next one!

Let's very broadly say there are two types of runnable files on computers: there are applications and there are libraries. That's not accurate, but I'm keeping it simple.

Libraries, by design, are not launchable; you don't click a library icon and get a nice pretty application window. Libraries are pieces of a puzzle; they're useless on their own, but they fill in the gaps of an application when needed. When is that? whenever we launch an application that needs a specific library (and most need at least one), the library gets plugged into a library-shaped hole, the application is made complete, and it runs.

When the library is not available, the application fails to launch, or it crashes at some point, usually with some kind of library error (an `.so`-related error on Linux, a `.dll`-related error on Windows).

One caveat:

A 32bit (x32) library only fits into a 32bit application, and a 64bit (x64) library only fits into a 64bit application.

Applications are simpler.

  * An x32 app runs on an x32 _or_ x64 system

  * An x64 app runs on an x64 system only

As long as its pointers fit into the address space the host OS gives it and all the libraries it needs are present, an application is healthy. It doesn't actually ever need to know if it's x32 or x64; it just tries to launch when clicked.

And how does a computer know where to find the application's code when we ask the computer to launch something?

Well, the list of places to search is stored in an environment variable called PATH (or `%PATH%`, on Windows).
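
You can watch that lookup happen. A sketch from a Linux shell (the exact directories will differ from machine to machine):

    $ echo $PATH
    /usr/local/bin:/usr/bin:/bin
    $ command -v ls
    /bin/ls

The shell walks that list, left to right, and launches the first match it finds.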

It makes sense to separate x32 libraries from x64 libraries. After all, you might have an x64 version of the standard C library installed on your computer, and have the need to also install the x32 version, because you may want to run one application that requires `libc.x64` and another that requires `libc.x32`. You don't want those two things to conflict, so you might do something like make a separate directory for all your x32 libraries, or maybe give all the x64 libs a unique name, or whatever.

You would want to do this consistently, preferably in such a way that it does not break backward compatibility (existing x32 apps should still be able to find x32 libs without the users or developers doing anything differently).
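
Linux distributions do exactly this, consistently. The details vary by distro (so treat this as a sketch rather than gospel), but a common convention looks like:

    /usr/lib/      <- x32 libraries
    /usr/lib64/    <- x64 libraries

Debian-family distros use multiarch directories like `/usr/lib/i386-linux-gnu` instead, but the principle is the same: the libraries get separated, and nothing about where _applications_ live has to change.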

It would make no sense to differentiate between x32 and x64 applications. As I said, applications don't need to know what they are; all they need to do is grab their libraries and launch. Each application launched gets its own process ID, each one claims some memory, and computing continues as usual.

And yet for some reason, Win64 has been designed to differentiate between x32 and x64 _applications_.

Why would you need to do this? theories abound online, and most of them are obviously misinformed and malformed; the ones that are correct boil down to this: we guess Microsoft decided it would be nice to be able to see how many x32 apps you have on your x64 system. And it might well be nice, for statistical purposes... the problem is, it breaks forward compatibility in order to preserve backward compatibility... if that even makes sense.

On Win32, applications are installed to `Program Files`. Fair enough. Backwards compatibility tells me that in the future, I'll look for apps in `C:\Program Files`.

On Win64, applications are installed to `Program Files`. Seems logical and consistent; as above, I know to look for apps in `C:\Program Files`. Oh, _unless_ the application is x32...in which case we'll just default to `C:\Program Files (x86)`.

The result? An x32 app I install on Win32 goes into one execution path (`Program Files`) but _the same_ x32 app I install on Win64 goes to a _different_ execution path (`Program Files (x86)`).

Same action from the developer, same action from the user, totally different results.

Doesn't seem like a huge deal until you look at it from a programming perspective.

On Linux, if I install an application to some default location (and by default, I mean _let the computer decide_ ), and then add that default location blindly to my PATH (which the computer does for me, but let's assume not), then when I call that app, I get an answer.

Doesn't matter what architecture I'm on; could be x32, could be x64, could be a PowerPC, a Raspberry Pi, whatever. I can blindly throw my installer at Linux and say "install this where ever you keep your binaries, and then launch it". Works every time.
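
A sketch of that on Linux (`myapp` is a made-up binary, standing in for whatever your installer delivers):

    $ su -c 'install -m 755 myapp /usr/local/bin/'
    $ myapp

`/usr/local/bin` is on the PATH of essentially every distribution, on every architecture, so the second command just works.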

Would you know how to do that if you've never used Linux? No, of course not, but you would spend an hour reading up on it, try it, and it would work as expected, on each architecture you (or, more importantly, your users) try.

On Windows, though, it breaks.

Watch:

    ; launch a bundled installer...
    Exec "C:\WinFF-1.5.4-Setup-3.exe"
    DetailPrint "Installing WinFF and ffmpeg..."
    ; ...then append its install directory to the system PATH
    ${EnvVarUpdate} $0 "PATH" "A" "HKLM" "%PROGRAMFILES%\WinFF\"

That works on Win32 because `%PROGRAMFILES%` resolves to `Program Files`.

But that SAME CODE _breaks_ if it's performed on Win64, because the `.exe` gets its path switched to `Program Files (x86)` while `%PROGRAMFILES%` STILL resolves to `Program Files`. In other words, Win64 changes one variable (the implied `$INSTDIR`) without changing the other. The end result is that on Win64, we install a binary to a place and then add a _different place_ to the PATH, even though on Win32 the same action installs to a place and adds that place to the PATH.

And for what?

There's no reason to differentiate, at least on the filesystem level, between x32 and x64 applications. Applications don't care if they're 32bit or 64bit. Applications care if _libraries_ are one or the other, but an application doesn't care if it or any other application is. I can call an x32 app from the code of an x64 app, or an x64 app from an x64 app, or an x32 app from an x32 app; there is NO combination of application calls that is affected by bitness.

If Windows really, really wants to let the user differentiate between the two, then surely a different icon emblem or a meta directory of all x32 apps would be better. (Although, for the record, I still fail to see the significance, personally.)

But don't break your own system environment variables and internal logic.

## 15.2 User

 _Windows isn't safe for users. People should stop using it._

### 15.2.1 Accomplishments

At the end of an evening of playing around with a computer, the last thing I want is to look back with a sense of satisfaction over having "beat" Windows at something. By this, I mean I don't want someone to ask "how was your day?" and the best answer I can give is "well, I managed to get Windows to stop doing foo so I could do bar".

Why? because I want my personal list of accomplishments to be about _me_ , and the progress on something new and exciting that I've done. I want to be excited about something I invent, or dream up, or program. I don't want to be proud that I outsmarted some random programmer in Redmond whose boss told him "put a cap on how often our users can do foo". I want to be the guy who comes up with stuff no one else is doing, I don't want to become the minor day-player villain to a nameless code monkey. I want to be the main character of my own story.

That's my problem, in a nutshell, with much of closed source. They lock me out of my own achievements. Yes, I can lay claim to some pretty cool things, but only as a user. I have two choices: play their game, or don't play their game. Trouble is, I was always the kid who opted, in life, to just kinda do my own thing.

And that's not going to change in computing.

### 15.2.2 The Missing OS

People keep telling me that "most users just want a web browser", and while I do sometimes find that to be true of some group of users, there's an odd mix of activities that people do on computers, and a lot of it is outside the browser.

Which means, you have to install some applications.

Now, this is by no means unusual. I used to do it on Mac, and I do it now on Linux; it's just the kind of "power user" I am. I have a list of applications I want on a computer, and I'm pretty inflexible about that. But by gum, when you install an OS and all the default applications were written for 5 year-olds, and the amount of clicks it takes to download and install good ones nearly breaks my mouse button, then it's time to say enough is enough.

### 15.2.3 Here Let Me Do That For You

Back when I was a dumb Mac user, all my Windows friends would make fun of the OS for not letting me do anything. They meant that Apple had very strict expectations of how a user might want to interact with a computer, so in a world of infinite possibilities, Mac OS gave its users...3, at the most.

And mostly that's true. It's definitely one of the top reasons I abandoned Mac: too inflexible.

While I'll admit that it does appear to me, at first glance, that Windows is a little more liberal with options, the one thing that stands out most about its interface is the constant, persistent, eternal, non-stop, rapid-fire pop-up _things_. They appear in every corner, with everything that you launch or plug in. Heck, if someone walks past a Windows box, a pop-up alerts the user that someone is within RADAR range. I don't know how anyone accomplishes anything through the mire of helpful pop-up tips and alerts.

I'm sure pro users get into the control panel or registry and turn all that stuff off, but for that to be the default betrays a lot about Microsoft; you're not the pilot, just a co-pilot.

### 15.2.4 Stupid Boot

I'm no computer genius. I've never written a kernel, or bootstrapped a machine from raw assembly code. But I have _compiled_ more than a few kernels, and I've made an initrd (initial ram disk, or something like that) or two, so I like to think I know a little something about the boot process.

One thing I know about the boot process is that it scans hardware available and constructs a system to talk to that hardware; this is why, for instance, your OS knows how to read from and write to the hard drive it is loaded on. It's why your OS knows how much RAM it has available to it, and what CPU it has access to.

On a pre-historic OS, you might hardcode the boot process to skip hardware probing and just boot on faith. That could be a good thing for a speedy boot, but a bad thing should any hardware ever change.

And yet that's rather how Windows acts when it boots and hardware has changed. I once repaired someone's Windows machine and had to swap out the CPU. The resulting boot process was treacherous: Windows alerted me that recovery was required, then it went through some kind of "recovery" process (it didn't actually tell me what it was doing, but it promised me that personal data would be restored), then it rebooted mostly as normal, but once the user logged in, Windows prompted her to reboot again so that the new settings could be applied. And all through the process, the user was panicking, convinced that instead of repairing the machine, I'd lost all of her data.

Maybe this is normal to pro Windows users because they've never experienced anything different, but on a sensible OS, a boot sequence after hardware has been changed is no different from a normal boot sequence. And that's exactly how it is on Linux, BSD, and even (at least, at the time of this writing, but I'm not taking bets on whether it'll change, given the company's history) Mac.

Another time, I tried swapping drives on two machines. Windows detected the change and refused to boot, telling me it could not authenticate to that hardware, and so I was breaching its license terms.

I know this will upset Microsoft, but the OS does not own the entire stack. Hardware changes should be detected by an OS, but the OS's job is to adapt to it, not fight it. A change in a CPU or a graphics card shouldn't require Microsoft's prior written consent, and it shouldn't claim that "recovery" is now required, as if there's been severe system damage.

### 15.2.5 Bare Metal

Not unrelated to Microsoft's apparent conviction that they should own the entire computing stack are the latest developments in UEFI. As of Windows 8 or so, Microsoft has made serious inroads into the firmware of computer hardware, insisting that a computer be able to boot "securely". The term "secure", as defined by Microsoft, is that they hold the keys, and in this case it's a software key that identifies the OS to the firmware; a little like an SSL cert between a server and a client.

First problem here is that Microsoft does not and should not own the computer that a customer has purchased. This is a huge issue, and it's most easily summarised by stating unequivocally that, by default, a new computer should have _no OS_ on it. I realise this means customers would have to buy the OS and install it. I don't see that as a bad thing, and in fact it's a good thing, because those who want to buy the OS can, and those who do not, will not. Those that don't want to do the install can hire someone to do it for them, and those that can install it themselves will do just that.

There's no reason that Microsoft Windows should come pre-installed, especially in this day and age of high speed hard drive imaging.

Secondly, an SSL cert (or a "secure boot" key) isn't actually _secure_ the way people think it is (assuming that most people, as they do, think that "security" means "protected by a magical force field"). These key checks can certainly verify that some entity had access to a key and has flipped some specific bits and blown some fuses, but they can't really verify anything about what those chips are up to, and they certainly can't guarantee that the OS about to launch is untainted.

Heck, nobody but Microsoft can verify that, and it's very much in their best interest to claim that it's pure.

Microsoft is a software company (licensed hardware products notwithstanding). They shouldn't be making firmware decisions for the bare metal, and I'm frankly shocked that any company external to Microsoft would allow it (so much for the free market, amiright?).

[EOF]

Made on Free Software.

# 16 Expectation and Intention

I saw this on a forum post the other day:

    There is no fix. No linux distro will work right with
    my monitors. Only windows does it right. Ubuntu, Debian,
    Kubuntu, Mint, puppy, and several others I've tried.

The "problem" the user was talking about is that his computer is seeing the "wrong" monitor as the primary screen, such that the desktop menu bar and other desktop _things_ are being placed on one monitor instead of the other. The user asked about it some, declined several suggested fixes, and then went on to rant about how no Linux distribution he has tried is doing it "right".

The problem (the one on the surface) doesn't concern me; I happen to have quite a bit of experience with graphics cards and multi-screen setups, so I know for a fact that there is definitely a way to configure a machine to use one monitor or another as the primary screen. The problem clearly has a solution.

No, what interested me in this post was what it reveals about how people perceive computers.

## 16.1 Definition of "Right"

We all do this at first: what a computer "should" do is defined entirely by what the computer we used yesterday did. This runs so deep, it's hard to fully grasp how pervasive it is, but I think I, personally, have a unique perspective on it. I grew up on a computing platform that did not have the majority of the market, so my critiques of Linux were always different from everyone else's critiques of Linux. Inevitably, whenever anyone I knew had an idea about how Linux could be "made better", the moment they voiced their idea, I'd shake my head, "No, no, you have it all wrong, you're trying to make it more like Windows. Here's what it SHOULD do..."

Of course, my brilliant idea would 90% of the time make Linux more like the platform I'd migrated from.

This is a tip-off: pre-conceptions of how a computer is supposed to achieve a specific task are on _us_, and they're not universal. Not by any means.

It really can't be; for every one user, the definition of the "right" way of doing something is different, even when those users are on the same OS. This holds true for the simplest concepts, like how you launch an application, how you open a file, how you remove a file, where you keep your files, and on and on. We all compute differently, and there's no OS on this planet that can suit everyone's individual styles. There are two paths toward a solution:

  1. an OS stays flexible and customisable so everyone can use it the way they prefer

  2. users conform to the one or two ways an OS provides for performing a task

You can probably guess my preference (if not: it's the first option, where the user stays in control).

I can hear you saying "yes, but there's such a thing as a de facto standard", and you're right. If 30 years of GUI computing has trained the world to expect one thing, then it seems counter-productive to throw it out. But I think you'll find that not everything has a 30 year tradition behind it. Most of us get used to something over the course of a year, and then when it changes the next year, we pitch our computer off a balcony. That's over-reacting.

Computers hold lots of programs. Each program has the option of doing any number of tasks in any number of ways. We as users need to remember a few things: there is no "right" or "wrong", and humans are great at adapting.

## 16.2 Hardware Knowledge

The post also fascinated me because it shows a severe deficiency in understanding software and hardware, paired with a resolute conviction that these same things are not working "right".

This isn't new to me. When I was working as a computer salesman to pay for school, I encountered this daily and generally had to simply endure it. Now I see it and usually walk away from it silently, although I generally make the offer to educate just in case someone wants to learn. Sadly, when faced with a situation that they don't understand, most users seem to prefer to blame the programmer, or bad mojo, or themselves, or whatever arbitrary thing they can use as an excuse to run away from the problem.

I find it difficult to identify with this mentality, because it's just not how I react to puzzles. If my car won't start in the morning, I don't go to a garage and tell them that my car will not start and that there's no possible solution, and that model of car is basically flawed anyway; I'd go to the garage, describe the problem, and look for a solution. Since I don't know a whole lot about cars, I'd let the mechanic explain to me the potential problems and how they might be diagnosed and repaired.

But for some reason, when people talk tech, there's a conviction that they know the issue, and often that the issue cannot be fixed.

In the post above, the "problem" is that the software is working correctly; it is identifying the primary display as assigned by the GPU. Since this particular user wants the opposite of what the hardware provides by default, the user is supposed to intervene and declare a preference. For whatever reason, this user is declining to do that.
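
On Linux, for instance, declaring that preference can be a single command. Here's a minimal sketch, assuming an X11 session (the output name `HDMI-1` is just an example; names vary by GPU and driver):

    $ xrandr | grep ' connected'    # list the outputs your GPU reports
    $ xrandr --output HDMI-1 --primary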

For whatever reason, Windows chooses to do something different. I don't know what it's doing, but maybe it's hard-coded to prefer DVI over HDMI, or maybe it detects and holds the first display plugged in as primary; who knows? It's probably a perfectly valid choice, and for some people, it works well. The problem is, some people obviously believe that it's the only and best way of detecting displays, forgetting that there are millions of other users out there, many with different expectations. All of this somehow gets translated into an assumption that there is a problem with the way the software and hardware are communicating. To anyone with half a programmer's brain, it's obvious that there's not a problem, just a _difference_. Heck, to a programmer well versed in drivers, there's possibly even an underlying "right" answer based on how the GPU declares itself to the OS.

It's no mystery as to where a user gets their absolute "knowledge" about these kinds of things: it's from whatever their previous experience prepared them for.

This cuts both ways. Just because Linux might auto-detect the "right" monitor for someone doesn't necessarily mean it's doing anything better than Windows. It just means that you happen to like the way it is defining the primary display.

The solution is to understand how the selection is being made, and to do whatever you need to do to get it to align with your expectation. Either that, or stop complaining and accept the fact that there are details that exist over your head, details you do not have the time or interest to understand. That's not a bad thing, but complaining loudly whilst acting like you do understand isn't helpful to anyone.

## 16.3 Custom Orders

No OS vendor takes custom orders from every single user. Configuration is going to happen somewhere along the line. You can pick and choose the things you firmly believe should just be a certain way, but that doesn't mean anyone agrees with you.

A good OS has ways for you to set your preferences and then _keep them_ across upgrades and re-installs; a bad OS sets those preferences for you, and forces you to settle for what they give you.

[EOF]

Made on Free Software.

# 17 Analogue Random Number Generation

Some time ago, I realised that sometimes I needed random numbers to play a quick RPG or dungeon crawl adventure. This would often happen when playing a solo, scripted game, especially something like Lone Wolf or a solo adventure for Tunnels & Trolls, so I wouldn't have a die on me (or else, no place convenient to give the die a proper roll).

I needed some method of producing random numbers, within a specific range (usually 1 to 6, but sometimes 1 to 10), without dice, and without relying on my own subconscious bias.

While I was at it, I also thought that a no-electronics solution might be nice, too. Sure, you could grab a mobile phone, find a dice rolling app, and you'd be done, but I'm not fond of mobiles, and part of the appeal of "tabletop" gaming, for me, is that it doesn't rely on electronics (or relies on extremely low-powered electronics, such as an e-ink ereader, which gets about two weeks of life on one charge). So I wanted something independent of heavy programming and power consumption.

## 17.1 How Random is Random?

Ask any Linux geek and they'll tell you: random numbers are harder than you think. When you get started in Linux, it seems like one of the things you get told pretty early on is that randomness in computers is basically impossible to generate, because a computer really is a closed system. Depending on what you define as a "computer", it may be a _big_ closed system, but if you're asking a computer to provide you with a random number, you'll find that the results are, by some measure, predictable.

If you introduce humans into the equation, you can do a little better; tell the computer to, for instance, check for cursor coordinates in 3 seconds, and then tell the human to draw a picture or to just start wiggling the mouse around the screen.

Sounds random, right? Well, it's strange, but the more you do something, the less random it gets. You start to notice patterns, like maybe most humans start their mouse position in the upper left corner of the screen, or maybe they tend to ignore prompts to move the mouse, or whatever.

What most people find (outside of encryption, ideally) is that "random enough" is good enough. If I ask a computer for a number and get back a number I could not have predicted, then I call it "random".

But that's not actually random; it's "unexpected".

And for most fun and game applications, that's good enough.

## 17.2 Unexpected Number Generation

Let's say I roll a d6 (six-sided die) three times and get back 1, 1, and 1.

That's kind of disappointing, isn't it?

Totally random die roll, but the results aren't as satisfying as when I roll, for instance, a 1, 6, and 4.

In fact, a 1-6-4 roll looks totally random compared to 1-1-1, which just feels lazy.

So even if I really did randomly roll a 1-1-1, if I told you that I'd rolled 1-6-4, the latter feels more random to most of us than the former.

Likewise, if I asked you to pick a card from a deck, and each and every time you picked a "random" card, I was able to tell you exactly what you'd picked, the feeling of randomness fades quickly. You'll either call me a magician, or you'll accuse me of playing with a marked deck (or both); the idea being that predictability equals non-random. But if I was only right about 50% of the time, then suddenly it's random again. This is, obviously, how and why hustling works.

So, what I was looking for is a way to hustle my own brain when selecting numbers.

## 17.3 Big Numbers, Divided

The first idea I had was to mask my number selection by just drumming up very large numbers in my head, which I would then divide by 6 until I got the number down to a value from 1 to 6 or 1 to 10.

Example: I choose 762. I have no idea what that reduces down to, because I'm bad at math, so I start the calculations: I know that 6*100 is 600, and my number was bigger than that, so maybe 120? 6 goes into 120 60 times, and it goes into that 10 times. So now I have 10, but I never resolved that trailing 2, so let's call it 12 divided by 6 equals 2. There, I just rolled a 2.

That seemed pretty practical until I realised that poorly-done math could be manipulated to give me any number I needed, and math done well was just too much work. I felt that a simple die roll shouldn't take me out of a game just so I could practise long division.

## 17.4 I Spy...

My next idea was to just look around the room, choose something to count or some attribute to categorise, and then convert it to an integer value.

For example, I might arbitrarily propose that I'll look for all sources of artificial light in a room. And then I'll count ceiling tiles across the room. And then the tiles spanning the length of the room. And so on. Any number higher than 6, I might divide by 6, or by 3, until I'm within range.

Or I might look at some object, identify its colour, and then take the first letter of the colour name and convert it to a number (a=1,b=2,c=3, and so on). That seemed practical, but again it was subject to a subliminal bias, or possibly just an obvious pattern, where I would fixate on certain colours, or suspect myself of subconsciously choosing colours that ranked higher than others.

## 17.5 Modulo

The problem with the choice-obfuscation methods was that there was no baseline for what is or is not "random". I can come up with numbers in my head, but how can I tell whether they were predictable or not? I might not be able to rattle off the result of long division, but how can I be sure that my brain isn't playing tricks on me, making me win each combat roll?

Even so, math seemed like a promising path. Within just one or two internet searches, I found a classic old trick from early encryption that could produce unexpected numbers. It's based around the **modulo** , which is similar to the "remainder" principle in division you were taught in school.

This method requires two seed numbers to get things started, but after your first number it self-generates.

To start, choose two seeds. Let's go with 4 and 6.

Take your two seeds and add them together, then divide the sum by 6 (the number of sides on your imaginary die); the remainder is your result. If the remainder is 0, replace it with the sum.

(4+6=10) mod 6 = 4 (6 goes into 10 once, with 4 left over)

Now take your result (4, in this example), and add to the _latest_ seed (6, in this example) and repeat the process.

(6+4=10) mod 6 = 4

So you've rolled a 4.

Next roll, do the same: previous seed plus your result:

(4+4=8) mod 6 = 2 (6 goes into 8 once with 2 remaining)

Again:

(2+4=6) mod 6 = 0, which we replace with the sum: 6

(6+2=8) mod 6 = 2

(2+6=8) mod 6 = 2

And so on. It worked, but its results were a little annoying; depending on your seed numbers, the results could feel very random (try 2 and 3) or they could seem like you were trapped in an infinite loop (try 3 and 6).

I liked that it was, itself, eternally progressive; it self-generated new seeds after the initial ones, and it was just complex enough to discourage the brain from predicting, but just easy enough to do without stepping too far out of the game.

This method is a real contender, and definitely a gem to know in a pinch. If you have nothing else, you have progressive modulo.
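
If you'd rather let a computer do the counting (defeating the analogue point, admittedly), here is a minimal Python sketch of progressive modulo, following the "replace 0 with the sum" rule described above; the function name is my own invention:

    #!/usr/bin/env python

    def progressive_modulo(a, b, sides=6, count=9):
        """Generate `count` rolls from two seed numbers."""
        rolls = []
        for _ in range(count):
            total = a + b
            roll = total % sides
            if roll == 0:
                # the sum was evenly divisible by the die size
                roll = total  # (note: as written, 6+6 would yield 12)
            rolls.append(roll)
            a, b = b, roll  # the newest result becomes the next seed
        return rolls

    print(progressive_modulo(4, 6))  # the worked example above
    print(progressive_modulo(3, 6))  # watch this one fall into a loop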

## 17.6 Shifting Tables

From these ideas, I came up with the idea of a shifting table that might make predicting the result difficult. My idea was to generate a little table of numbers like this:

1| 2| 3| d  
---|---|---|---  
4| 5| 6|

To use it, you pick a number and then step through the table in a north-to-south, left-to-right progression. The first "roll" is the seed.

For instance, if I pick 3, then I count three steps from 1 to 4 to 2. Keep that in mind, and pick a number for your roll. Let's arbitrarily choose 3 again.

First, shift the table by 2 (this produces the "unexpected" part of the equation):

2| 3| d| 4  
---|---|---|---  
5| 6| 1|

Then "roll" 3 spaces: 2 to 5 to 3.

The result is 3.

For the next roll, let's choose the number 5.

First, shift the table by 3 (our previous roll):

5| 6| 1| 2  
---|---|---|---  
3| d| 4|

And move 5 (our chosen roll): 5,3,6,d,1.

So we rolled a 1.

And so on.

The value of **d** is "slide up or down". So if you land on **d** , then slide one space up, which in this case would have rendered the result of 6.

This method works well enough; it's low on math, easy to use, and it's pretty unpredictable since you are constantly shifting the position of the numbers within the table. The trick really is that you declare your roll _before_ you shift the numbers, so you're tricking your brain and forcing unexpected results.

Its weakness, I felt, was how linear it was. I was afraid that after a few rounds, I'd get used to how the table shifted, even if only approximately. There's no real way to re-seed, because no matter what, the shift is in one predictable direction, without variation.
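
For what it's worth, the mechanic is easy to model in a few lines of Python. This is a rough sketch only: I treat the table as a one-dimensional ring and simplify the walk and the **d** slide, so it approximates the idea rather than reproducing the paper version exactly:

    #!/usr/bin/env python

    table = [1, 2, 3, 'd', 4, 5, 6]
    previous = 2  # the seed roll

    def roll(steps):
        """Shift the ring by the previous result, then count `steps` cells."""
        global table, previous
        table = table[previous:] + table[:previous]  # the shift
        cell = table[steps - 1]                      # count, starting at 1
        if cell == 'd':
            cell = table[steps - 2]  # "slide" off the d cell
        previous = cell
        return cell

    print(roll(3), roll(5), roll(2))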

## 17.7 Pocket Dice Roller

I liked the ideas of _unexpected_ numbers over random numbers and progressive seed generation, but I still wanted something as simple as a dice roll (simpler than maths). I kept these notions in mind, and then two things happened: I attended a zine-making workshop, and I downloaded Dungeon Delvers.

At the zine workshop, I learned about a foldable, single-sheet booklet, which online seems to be called the "pocketmod" but I don't know who invented it. And then when I downloaded **Dungeon Delvers** , I found that its rules came as a "pocketmod". It all came together: I could use this funky zine trick to create a gaming utility!

From there, the concept was obvious to me: a small booklet with a table of numbers on each page. Seed the process with any number; turn to whatever page number you have in mind, and see what number is there. This is both your "roll" result and the seed for your next roll.

That concept quickly developed into a system using letters and numbers (using numbers for both the page number and the roll result gets confusing), with multiple choices per page (your letter choice resolves to your die roll, and the die roll becomes the destination page for the next letter you choose).

To make things even less predictable, I made the booklet flippable and floppable, so at any point you can "mix up" your results (that is, confuse your expectations) by turning the book over, or flipping the book upside-down. This provides 4 entry points, each with a different result.

Since your seed is independent of your roll choice, you can change the direction of your "roll" at any time, effectively pulling the rug out from under yourself if you feel that your brain is remembering pages.

The booklet is, like a card shark's deck (cutting a deck of cards does not shuffle, it just offsets the "starting" point), an infinite loop. There are no page numbers, so you can start counting from any page that you arbitrarily call "page 1", and get a completely different result than if you performed the same roll from a _different_ "page 1".

To make it more versatile, I also added d10 grids to each page, meaning that the book serves as a d6, d12, d10, and d20 roller.

The booklet prints on one sheet of paper, so it can be made easily at home, and it fits nicely into your wallet.

Since many of my RPG materials are independent or from small publishers, a lot of it comes as PDF or (thankfully) EPUB. I figured there was probably also a market for a dice roller that is non-calculative (since many ebook readers do not run even simple scripts without a considerable amount of hacking). For that audience, I adapted my Pocket Dice Roller to `.epub` (it's written in Docbook, so really it can generate any half-way sensible ebook format).

In a way, it's simpler than the paper edition, since there's never any sense of where in the book you are. As long as you just keep pressing letters and numbers, the seed is unexpectedly shifted, and you get an unpredictable roll result.

The solution is a simple example of analogue programming. As long as the user is invested in not knowing the starting point, the user can "randomise" the seed and produce unexpected and unpredictable results every time.

Hopefully, it's a useful system. It's published as a Creative Commons project, so improvements and variations are welcome.

The source and printable renders are available from gitlab.com/notklaatu/pocketdiceroller.

[EOF]

Made on Free Software.

# 18 Classes and Functions

If you came to Python from a scripting perspective, as some sys admins and most beginning programmers do, then the concept of functions and classes can be difficult to understand and leverage. Some teachers describe a class as an "object", which in some cases does reflect how classes behave. Others call a class a self-contained (but not necessarily _self-sufficient_ ) data set, which also describes classes pretty well. Mostly, the best way to understand classes and functions is to use them, so let's learn by doing.

## 18.1 No Functions, No Class

First, let's see what a Python application looks like without classes or functions. This is how most new programmers start, or sys admins who are used to BASH:

    #!/usr/bin/env python

    FirstVar = 1 
    SecondVar = 2

    print(FirstVar + SecondVar)

(This would of course print **3**.)

The variables created here are global variables; they just hang out in common areas and are available to anyone that needs them. It's simple and convenient. However, it can get messy and confusing if you are dealing with lots of variables, and in larger applications it can also get unnecessarily memory-intensive.

## 18.2 Function

Functions are most often used as formalised methods of repetition.

If there is a task that you know needs to be done again and again in your program, group the code together into a function and call the function as often as you need it. This way, you only have to write the code once, but you can use it as much as you like.

The other reason you might use a function is to keep variables insulated from the rest of the program. There is no need to keep holding temporary variables in system memory or in your namespace when you only need them to hold a bit of info while you do some math. Isolate them in a function, use them as the function is running, and then let them fade away when the function ends.

Here is an example of a function, with an emphasis on how it deals with variables.

    #!/usr/bin/env python

    worldVar = 12

    def Boxxy():
        boxVar = 1328
        print(boxVar + worldVar)

    if __name__ == "__main__":
        print(worldVar)
        Boxxy()

    print("Failing...")
        print(boxVar)

Run the script:

    $ python ./boxxy.py
    12
    1340
    Failing...
    Traceback (most recent call last):
    File "./boxxy.py", line 16, in <module>
       print(boxVar)
    NameError: name 'boxVar' is not defined

Notice how everything but the final request works perfectly. The **worldVar** is a global variable so it can be accessed from anywhere. No big deal there. The **boxVar** , however, is confined inside of its **Boxxy** function, so the final attempt to **print(boxVar)** fails, since there is no such variable outside of the function in which it was created. It is considered a _local variable_.

## 18.3 First Class

Sometimes functions are not enough, because you want your local variables to be shared among several functions, but not with every part of your program, or you need to design a component of your application and then call it into being several times.

You can isolate variables by creating them in **classes** and **methods**. By doing this, they only get used when needed.

    #!/usr/bin/env python

    class FirstClass(): 
        def __init__(self):
            FirstVar = 1

    print(FirstVar)

This will fail:

    $ python ./classy.py
    Traceback (most recent call last):
    File "./classy.py", line 7, in <module>
       print(FirstVar)
    NameError: name 'FirstVar' is not defined

This is telling you that the variable FirstVar, essentially, does not exist. It does not appear to exist because it was defined within a class, and so it is closed off from the rest of the world. This is great when you need to create a bunch of variables just as temporary holding places while you work out some self-contained problem. Or, more importantly, when certain variables _must_ only apply to that class: for instance, in a video game you might create a variable to represent gravity. This variable constantly pulls an object toward the [game's] Earth, just as it does in real life. Unlike real life, however, video game gravity, if applied globally, would also cause the earth itself to fall, so you certainly would not want that gravity variable to affect objects outside of your player avatar, or else everything would fall right off the screen. So you _must_ insulate that variable.

## 18.4 Getting Things Into a Class

There are two ways to get a variable into a **class**. The first way, you have just done: create a **class** and inside of that **class** , create a variable. Unless you deliberately send it out, that variable will stay forever locked within that **class**.

The second way to get some information into a **class** is to send it there.

Now, remember, **classes** and **functions** are barriers. By design, variables cannot just walk out of a **class** or a **function** at will, nor can they walk right on in uninvited, without the right credentials. To get a variable into a **class**, you need to pass the variable in as an argument. I am not a programmer by training, so that's a fairly obtuse concept for me, but if it helps, you can sort of think of portals. The portals you will use are parentheses: ()

For example:

    #!/usr/bin/env python

    class First(): 
        def Blue(self,portal):
            print('running Blue method')
            print(portal)

    if __name__ == "__main__":
        Player = First()
        Player.Blue(13)
        print(Player.Blue(14))

In that example, we create a class called **First** and a method within that class called **Blue**. At the very bottom of the application, we instantiate the **First** class as the variable **Player**, then call its **Blue** method, once on its own and once from inside the Python **print** function.

But most importantly, notice the parentheses in the method declaration: _this_ is the portal through which we can send information from the outside world. First, we list **self**, which Python supplies to the method automatically.

The second thing we define for our portal is _some placeholder_ for an incoming variable. It doesn't have to be any special word; I call it **portal** here just to be demonstrative, but I could have called it **penguin** or **myVar** or **whatever** (or whatever). The important thing is that the method now knows that it can expect (and indeed, _demand_) a variable to be handed to it whenever it is called.

In the part where we actually use our class, we submit an argument; first we provide the integer **13**, and next **14**. As you can see from the results, the method takes in the argument as **portal** and prints it out for us, regardless of whether we invoke it on its own or from within some other function like **print**. (You'll also notice a stray **None** in the output after **14**; that's the **Blue** method's return value, which we'll deal with shortly.)

## 18.5 Making Functions in a Class Communicate

Notice the problem we get when we do something like this:

    #!/usr/bin/env python

    class First(): 
        def Blue(self,portal):
            cake = portal
            print('running Blue method')
            print(cake)

        def Orange(portal):
            print('running Orange method')
            print(cake)

    if __name__ == "__main__":
        Player = First()
        Player.Blue(42)
        Player.Orange()

That code renders an error because the **cake** variable is not known to the **Orange** method. To let **Blue** and **Orange** share data, you need to use **self**; it is one of **self**'s jobs to facilitate communication within a class. So to fix this problem, you need to do two things:

  1. bring **self** into both functions that need to share data, in this case: **def Orange(self):**
  2. make the data needing to be shared an attribute of **self** , for instance:

        self.cake = portal

and

        print(self.cake)

After making those adjustments, the program looks like this:

    #!/usr/bin/env python

    class First(): 
        def Blue(self,portal):
            self.cake = portal
            print('running Blue method')
            print(self.cake)

        def Orange(self):
            print('running Orange method')
            print(self.cake)

    if __name__ == "__main__":
        Player = First()
        Player.Blue(42)
        Player.Orange()

And so it outputs **42** as expected.

And as you can see from the code, you only fed Python the number **42** once, and it retained it even when you called upon the class later to run the **Orange** method.

[Graphic showing the flow of data]

### 18.5.1 Getting Data Back Out

Sometimes you need to get data back out of a **class**. Maybe you need to get the results of an equation, or maybe you don't need to see the data itself but you do need a **Boolean** , like **0** or **1** just to see if something worked.

The `__init__` function of Python returns **None**, and is always expected to return **None**. Like UNIX itself, Python assumes that silence means success, so if you have `__init__` return anything but **None**, Python raises a **TypeError**; it assumes that `__init__` (the very function that spawns a class object when a class is called) has failed.

You don't want that, so do not try to get feedback from an `__init__` function.
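
For instance, this throwaway class (hypothetical, purely for demonstration) blows up the moment you instantiate it:

    #!/usr/bin/env python

    class Broken():
        def __init__(self):
            return 1  # anything but None is forbidden here

    Broken()  # raises TypeError: __init__() should return None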

All other functions are fair game.

If you are running a **class** in order to get information out of it, then what you really want to do is find _the result of_ the class. Whenever you hear or think the phrase "the result of" in programming (not just in Python), you can bet there will be a new variable involved. Why? because you give the computer data via variables, and the computer gives _you_ data via variables. Variables are how you and the computer communicate. Get used to it.

We have this program:

    #!/usr/bin/env python

    class First(): 
        def Blue(self,portal):
            print('running Blue method')
            print(portal)

    if __name__ == "__main__":
        Player = First()
        cake = Player.Blue(42)
        print(cake)

Run that and you will see that the final print statement renders "None". This is because the **Blue** method, like all Python functions, returns **None** by default. It is safe to assume that getting silence from a process means that it successfully ran, because when programs fail, they spit out error messages.

But in this case, we want the program to return a value for **cake**. So make it do just that by adding the line **return portal** to the end of the function:

    #!/usr/bin/env python

    class First(): 
        def Blue(self,portal):
            print('running Blue method')
            print(portal)
            return portal

    if __name__ == "__main__":
        Player = First()
        cake = Player.Blue(42)
        print(cake)

Trace the flow of data and you see that now the input ( **42** ) gets fed into the portal argument of the **Blue** function, it is printed, and then it is returned as **portal**.

Where is it returned _to_ exactly? well, it is returned as _the result of_ the class, so it is dumped into the **cake** variable.

This is why, when we print **cake** , the number **42** appears.

You can, of course, return anything you like. You do not have to return the same thing that you fed into the **class** in the first place. For instance, try **return 100** or **return time.localtime()** (well, you would have to import the **time** module for that) and see what happens when you print **cake**.
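
For example, here's the same class with **Blue** modified to hand back the current time instead of its input:

    #!/usr/bin/env python

    import time

    class First():
        def Blue(self, portal):
            print('running Blue method')
            print(portal)
            return time.localtime()  # return something unrelated to the input

    if __name__ == "__main__":
        Player = First()
        cake = Player.Blue(42)
        print(cake)  # a time.struct_time, not 42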

## 18.6 Now You Know

That's everything you need to know in order to comfortably move data into, within, and out of a Python **class** or **function**. The more you practise, the more natural it will feel.

Have fun!

[EOF]

Made on Free Software.

# 19 Don't Panic

There's a disconcerting tendency I've noticed when people try, unsuccessfully, to do something multimedia-related on Linux: they panic.

I'm not talking about "panic" in the sense that I have a low threshold for calling someone helpless; I'm talking about otherwise intelligent people acting like there's no known solution to a problem as small as audio not being audible through the speakers...and failing to notice that they left their headphones plugged in.

I say this is a trend because I've witnessed it several times among "normal" people who use Linux on a daily basis: co-workers using Linux as their workstation, professors teaching computer science, geeks who use several operating systems. And it does seem to be fairly specific to multimedia activities, for some reason; they're happy to troubleshoot why a shell variable isn't updating correctly, or why a library isn't compiling, or any number of issues that would send the everyday user running in fear, but the minute they open a web browser and a youtube video fails to play, they toss the mouse aside with a sigh, saying under their breath, "Oh, Linux..."

I witness this pretty much on a weekly basis at this point, mostly because I work in an industry that uses Linux on a massive scale. I have personally witnessed perfectly intelligent users:

  * blame poor youtube frame rates on Linux, switch to a Mac or Windows box, and discover that actually, it's just the network. Even funnier? on more than one occasion, the Mac or Windows boxes refused to play the audio (true story); once, the user just settled for the same poor frame rate and no sound rather than switch back to Linux to have the same frame rate _with_ audio, either because he was too embarrassed to admit he was wrong or because he was just pressed for time.

  * give up on playing music on Linux because of "audio issues" which turned out to be headphones left plugged in.

  * give up on playing music on Linux because they left Kmix muted.

  * bemoan Linux web video compatibility due to temporary network issues.

  * chat about how difficult it is to edit video on Linux whilst waiting for a presentation on editing video on Linux by a professional editor.

I assume this is indicative of something.

## 19.1 Drivers

There's this stupid thing about some hardware manufacturers where they fail to publish the code required to actually _drive_ the hardware. I could go on and on about this, but complaining will get us nowhere; that's the state of things, still.

Usually, some very brilliant developers manage, somehow, to work with that. But there are times when audio drivers fail. When I worked at Apple for a year, I was shocked to find that this actually happened during development of Apple computers, too, but the difference is that when it happens on the vendor side, then the vendor just calls up the hardware manufacturer and either gets code that works or gets a programmer on site to fix the issue.

Most people using Linux, or trying to use it, are installing it themselves, so they haven't got the luxury of just calling up the designer of the audio chip to get things squared away.

So sometimes, when you're installing Linux, you have to do a little bit of manual configuration. You have to jiggle the virtual wires. Historically, that often meant "open alsamixer in a terminal..." type replies in forums, and now it tends to be "set up a .asoundrc file..." hacks.
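
To give you a taste, a minimal `.asoundrc` that forces a particular default sound card looks something like this (the card index **1** is an assumption; check yours with `aplay -l`):

    # ~/.asoundrc: use card 1 for playback and mixer control
    defaults.pcm.card 1
    defaults.ctl.card 1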

That's something I guess we Linux users will be living with until...

  * vendors publish appropriate, preferably standardised, specs for their hardware
  * non-technical users accept that Linux can be purchased pre-installed

I prefer the latter option; it feels unjust for Linux to earn a reputation for poor drivers when the obvious answer is pretty much the same as for every other computer problem a member of the general public encounters: buy a new computer. I'm not saying that's the _right_ answer to computer problems, but I am saying that too often people look at Linux as a fallback OS for an otherwise useless computer, then complain when they have to configure it, or don't get the performance of a new computer just off the shelf. They aren't _wrong_, but they aren't exactly _right_, either.

All of that aside, the impression remains: if there's a problem on Linux, blame a driver. If you don't understand drivers, panic.

## 19.2 Practise

Sometimes there's nothing wrong at all. Sometimes it's just a matter of remembering the basics of a typical computer interface.

I think the problem here is lack of _latent comfort_ (if I may invent a term) with the UI. You know the sort of comfort I mean: the kind of comfort that you become aware of when you think "I should adjust the volume" and then you look up at your volume icon and see that your mouse is already there adjusting the volume. That kind of non-thinking "oh yeah...I knew that" comfort.

Lacking that, when a problem arises that can be solved easily, but only gets solved every now and again, it comes across as intimidating at best and insurmountable at worst.

That's the long way of saying that practise makes perfect.

## 19.3 Stopping the Cliché

No matter what, it seems some clichés just won't go away. I don't know how they get started; some are marketed very deliberately, others are coerced into the public's mind. I never saw, for instance, an Apple Mac ad saying that Mac was "better for artists" specifically, but somehow that's the reputation it got. I feel like a lot of these get reinforced by clueless computer sales people at (name your least favourite big box store), but that might be a bias I developed from working at computer stores back in school. Still, I can just hear them explaining the world of home computing to a customer: "Well Windows is what everyone uses and it's good for businesses, and Macs are for artists. Oh Linux? no no you don't want that, that's for servers."

Oh, the pain.

Still, the trite definition, the total absence of reason, the lack of explanations, they stick. When something doesn't play on Linux, the "obvious" reason is that Linux is an impostor, masquerading as a desktop. It doesn't _really_ belong on a PC, it's for a server. Quick, switch over to a "real" PC operating system. Why? well, so there's less panic when something goes wrong.

Yes, we don't switch because it will eliminate problems, we switch because we feel less uncomfortable when there is a problem.

The answer is actually pretty simple: don't panic. Stop, think about how computers work, and do some simple troubleshooting. Do it every time it happens, and before you know it, they aren't problems; they're just part of the computing experience. Yes, you have to un-mute your speakers to hear sound, and you have to play a video in a player that can play the kind of movie file you are trying to watch, and you have to plug in your headphones to get sound through your headphones.

[EOF]

Made on Free Software.

# 20 Making the Simple Complex, and Charging for It

I recently found a very subtle, basically innocuous example of how a simple system can be made unnecessarily more complex. In itself, this isn't that big a deal. It's something I would probably get annoyed at initially, and then forget about once the lesson has been learned. But looking at it from a different angle, I realised that in a way it's a metaphor for something larger and more problematic.

So, the example is this: environment variables in Linux are sublimely easy to set. If you're a BASH or Bourne-like shell user:

    $ export FOO=bar

Or if you lean toward a C-style shell:

    $ setenv FOO bar

Either way, you have just created and set a variable called `$FOO` to the value `bar`.

Not a big deal, except it is; environment variables define the very reality of a Linux shell and are hugely powerful. The fact that they are so powerful but so simple to set and use, and so flexible (do you want the variable to last for just one command, or for the duration of your current shell and its sub-shells, or now and every time you ever log in?), makes controlling Linux fluid and dynamic. A well-versed shell user can define and re-define environment variables between commands; and some do. I do, myself, because I package applications for others to use, so I frequently set variables before building, or prior to running a test, or at work to define which version of a multi-version install of an important application to use.
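
To illustrate that flexibility in a Bourne-style shell (`mycommand` stands in for whatever you happen to be running):

    $ FOO=bar mycommand                     # for this one command only
    $ export FOO=bar                        # for this shell and its sub-shells
    $ echo 'export FOO=bar' >> ~/.profile   # for every future login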

## 20.1 How to Break Simplicity

This system works really well. It's available to users for personal work, it's available to admins to define presets for all users, it's dynamic, it's scriptable, and most importantly it's really simple. I mean, by definition, it's simple. Key/value pair stored in memory during a session. Can't really get simpler than that.

So how can we take this beautifully simple system and needlessly complicate it?

I don't know why we would want to complicate something so effective, but as a mental exercise, the first thing we could do is take direct access to environment variables away from our users. No more shell-based interface for them. Instead, we'll introduce a GUI, in which values can be viewed.

But just storing those values in a text file sounds pretty simple; that is, after all, what files like `.profile` and `.bashrc` do, so storing the key/values in a text file is too easy. Instead, let's introduce a syntax to define what the key/value pairs are, and then we'll define the variables, and store them in a special hidden directory, and we'll do all of this in XML, because nothing says "over-engineered" like using a verbose format like XML for a simple key/value pair.

So what we have done now is, essentially, made a Windows Registry out of an otherwise simple system. Would there be hacks around this? yes, of course; we haven't made it impenetrable, we've just over-engineered something that has successfully been in use for decades. We have applied the "if it ain't broke, fix it" principle to something that definitely was not broken.

## 20.2 Simple is Pretty

Don't get me wrong: I'm aware that there's "simple" and then there's _simple_. We see it every day, and it usually has everything to do with familiarity.

For instance, if someone asks me to install Wordpress, because it's too complicated for them, I can honestly and humbly say "Sure, that's simple" because I know the alternative. It could be hard to install; it could be a project that requires me to install lots of patched PHP libraries by hand, and then copy over files, and re-create a directory structure that devs forgot to include in the tarball, and then edit a bunch of config files, manually create a database from scratch, alter the way my web server runs, and then hope against all hope that it keeps running. But for this user who doesn't do server maintenance every day, it's not "simple".

Wordpress has a "famous five minute install" or something like that; it's seriously simple. In fact, it can be _even simpler_ , because if you pay for a web host, nearly all hosting providers have a one-click install system that spins up a Wordpress instance (database included) for you. In one click. And yet even this is too complex for some people, because they don't even understand the concept of servers and web hosting and installing software on The Internet.

The problem? Either nobody is teaching them or they aren't willing to learn.

Either way, that's fine. If something simple to me seems complex to you, that's OK.

However, there is a difference between something _simple_ and something that rides on familiarity to appear simple. Introducing a GUI application to parse and view an XML representation of clearly delimited key/value pairs, and removing direct access to these definitions, is not making something simple, it's just bringing something into an existing paradigm, even if it doesn't belong there.

## 20.3 And Passing the Charges onto You

Who in their right mind would do such a thing? Well, to be fair it happens more than one might realise. The world is full of "user-friendly" front-ends to interfaces that are, to people who are familiar with them, simple. We could spend all day arguing over whether any one of them actually has helped or not.

That said, the interface that I've described doesn't seek to improve interaction with our formerly simple system; in fact, it doesn't even provide an interface for creating new variables. That, users will still have to do on their own, but they will not be able to do it dynamically as in the BASH or TCSH shells; they must use a text editor to generate XML and then store that XML in a specified location.

So why would anyone do this? it seems kind of crazy. For the record, what I have described is, in fact, exactly how Apple implements environment variables for any ENV settings that need to be passed on to its GUI environment. You can read all about it in their Developer Manual, because why would normal users ever need to have access to that information?
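
To give you an idea of the ceremony involved, a single key/value pair in that scheme looks something like this; I'm recalling the old `~/.MacOSX/environment.plist` mechanism here, so treat the location and details as approximate:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
      "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>FOO</key>
        <string>bar</string>
    </dict>
    </plist>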

The short answer is, I have no idea why anyone would do something like this. Surely Apple could have just integrated UNIX environment variables into Cocoa. But the question is larger than just why Apple took the long way round to complexity in this one case; the question is whether or not people pay for simplicity instead of knowledge.

And the answer is, I guess, that yes, people pay for an appearance of simplicity. And they pay for it even when what they are getting is not actually simple; it's just moved out of reach. It's something they don't have to learn, but whose benefits they still reap.

And that's the disturbing thing about technology, I guess. As it gets fancier and more advanced, people take it more and more for granted, in the same way we take clean water for granted. Should we all have to learn about these low[-ish] level systems that can pretty much get taken care of for us, if we let them? Obviously not. Would it be better to learn about them? yes, certainly; knowledge is good, and long-lasting, and powerful, and empowering. Is this knowledge readily available? probably, although to be fair I don't personally know where to begin to learn about clean water aside from condensation harvesting and boiling.

So what am I complaining about?

My concern is that the needless complexity is being _sold_ as an improvement, as a feature, when it is clearly in no way a feature. These "features" are often companies doing their work poorly, and it gets sold as a benefit to users. Think about the old "WinModems", those non-standard modems that worked brilliantly with Microsoft machines because they went against every standard on record, but were so thoroughly tied into Microsoft's code base that the OS could fill in the missing bits and pieces. Think of modern webcams and scanners and printers; there are great existing _universal_ driver specs for these sorts of devices, and yet still, to this day in the year of our Lord 2016, devices are being sold that will fail to work on certain operating systems.

But this is a feature to most people, because the devices are cheap, maybe, or they're guaranteed to work _for now_.

And since people aren't being taught about computers, or they aren't choosing to learn, they don't see how tech companies are charging money for jobs that are incomplete. The people who do notice this are the outliers, using the systems that are not compensating for the poor design choices, and when they take note of the inadequacy, their voices are either lost in the crowd of "it works for me" or they're assumed to be technological extremists (which we may well be, but that hardly changes the crime we're pointing out to everyone).

## 20.4 Keep Hold

The crux of the problem is, more than anything, the very real danger that society is starting to operate in ways that its populace do not understand. The complexity is getting so overwhelming that people can't be bothered to learn how it all works, and they instead grow up ignorant to the truth behind what has become, in many ways, magic. I'm not talking just about technology, here; the political system, the economic system, the banking system are all equally at fault. And somewhere along the line, people aren't getting educated on what they need to know in order to function freely. I'm not saying everyone needs to know everything, but I am saying that people should know where and how to find real information on the things that matter to them. It should be available, it should be open, it should be as full of liberty as most of us claim we aspire for our societies to be.

[EOF]

Made on Free Software.

# 21 OS or Distribution

Why are there so many Linux distributions?

This is a question asked by every new user. But it goes deeper; what _is_ a distribution? what do the people creating a distribution _do_ , exactly? why are there so many? what's different about them? when is a distribution a distribution, and when is it just a "remix" or "re-spin"?

In fact, the answer is surprisingly simple: distributions don't exist the way you imagine they exist.

Here's what you think (I know, because I used to think this, too):

You think a distribution is an OS, and you define an OS as a disc you get from a company, which you put into your computer, install, and use. There are two operating systems: Mac and Windows. Oh, and a third: Linux. When you boot a computer, the OS appears, and you do your work; you install applications, you use them, you print, you email, you browse the web. When you're done, you shut down.

OK. So, that's basically what an OS is but it's not really what the word distribution encompasses. Imagine a _distribution_ like this: you buy an OS, and then you hand it off to someone else. They disassemble the OS, make a bunch of customisations to it, burn it back to disc, and give it back to you. Then you install it, and start using it as above.

As you can see, the process is very similar, but in the distribution's case, there's that extra part where somebody somewhere configures some stuff for you. And that, in a sense, is the beauty of a distribution; instead of you having to figure certain things out, it may already be done for you, depending on what distribution you choose, or a distro may at least get you a third of the way there. And "there" is defined as _whatever it is you are looking to do_.

It's multi-tiered, actually:

  * Looking to install Linux? well, a distribution that boots to a usable system gets you 100% of the way there.
  * Looking to install Linux with a full desktop with a wallpaper of a grassy field and blue sky? any distribution with KDE gets you 98% of the way there, because you will have to go find the wallpaper you want, download it, and change your wallpaper to that.
  * Looking to install Linux with a full desktop, grassy field wallpaper, a menu bar located at the top of the screen and keyboard mapped for quick application access? most likely a distribution with the KDE desktop will get you 80% toward that goal. You'll have to find the wallpaper, set it, and then modify the keyboard shortcuts.

And so on. Generally speaking, the more specific your requirements, the less any distribution meets them. For instance, it's super easy to find a distribution that will result in a computer that boots to a desktop. If you rummage around a little, it's easy to find a distribution that will make an ancient old junk PC boot. But if you start spouting off orders like "it must have a purple desktop, and rounded icons, and a needlework application pre-installed" then it does get difficult to find exactly what you are looking for.

That doesn't mean there isn't a distribution out there that's going to try to meet your demands, though. Since Linux is both independently produced and costs $0 to re-distribute, it doesn't take much for someone to spin up a "distribution" containing all of their demands and expectations, and to post it online as a "distribution" of Linux. They aren't wrong; they are distributing Linux.

But that doesn't mean it's going to be what you want, and at some point in your search, you have to stop and ask what's costing you more: searching the entire internet for your OS soul mate, or picking one and customising it to be what you want.

## 21.1 Your OS

Linux and open source are not fast food restaurants. They're salad bars. You can customise them to fit any expectation and any requirement you arbitrarily declare vital. If that sounds like work to you, consider that you probably don't exactly run the out-of-the-box OS that comes with your computer. You know that OS, and so the changes you make to it feel natural. You don't know Linux, so everything feels like work. Heck, finding your "start menu" feels like work.

So don't think that you have to customise everything right away. Work on it. Let it develop. You get to keep all of your changes, so let them accumulate. Let Linux grow with you as you learn it.

I'm not ashamed to say that my early Linux desktops all stayed as close to my former OS as I could possibly get them. And back then, it was never enough. I spent way too much time trying to get little nuances to be exactly like my old OS.

And then one day a funny thing happened. I realised that I'd done it: I'd reached the point that my Linux environment was a perfect clone of my old OS. I was so proud of myself. But wait, that's not all. That same day, I sat down at a computer at the school where I worked, and fired up the OS I used to use, because I wanted to marvel at how totally _the same_ they were. Much to my surprise, everything I remembered about that OS, I'd made up. I had invented all kinds of things, over time, that I was convinced were the way the other OS worked, so I'd made my Linux work the same way, and I'd been so proud, because I was "fixing" Linux.

And in fact, I was fixing Linux. I was fixing it for myself, based on ideas that were being invented in my own head as the way a computer _should_ operate. Little things, like having a dedicated system hotkey (separate from ctrl and alt, which are owned by whatever application is in focus), or like having conventions for file naming schemes, and the way applications get installed and how their sources get stored for later. I had come up with a very specific and natural way for me to use my computer, and I'd made Linux conform to that, and for some silly reason I was convinced I was basing it all on my old OS when in fact I was basing it on all the _fixes_ to my old OS that I had so badly wanted to make to it that I jumped ship and switched to Linux!

And that's what Linux should be for you: it should be your old OS, fixed.

[EOF]

Made on Free Software.

# 22 Solo RPG and Tabletop Gaming

It's one of those unsolvable paradoxes; was I lucky to have gotten into tabletop gaming after I'd acquired a gamer girlfriend, or did I finally get into tabletop gaming only _because_ I got a willing partner? As an extension of that question, what would I have done had I decided to explore tabletop gaming prior to finding a steady partner? could I have still explored the hobby?

This all relates to possibly the most basic and important RPG puzzle of all: what do you do when you have no one to play an RPG with?

Well, as it turns out, yes, there are some "solitaire" equivalents out there in the fancy tabletop game market. There are also games that are meant for two or even more players that you can, in a pinch, adapt for solo play just by stepping in as both or all players. Is playing a solitaire game as fun as playing a game with others? well, I guess it depends on your ability to enjoy time spent alone, in your own head. I personally love it. I'm quite happy to privately obsess over a gameworld, and the characters I build, and their personal development through the encounters that they have in a game world. I don't need an audience to enjoy that, I don't need other people to validate the game, and I don't necessarily need the camaraderie to enjoy a fantasy world. Solo tabletop gaming is definitely a viable option for me.

On the other hand, those other-player perks are nice to have, sometimes. So I'm not saying that solitaire is the _only_ way to game. Not only do other players add a social dimension to the game experience, they also add degrees of entropy that AI truly cannot approximate.

With that said, here are some games I've found to be really good for solo tabletop gaming, and also some tricks and ideas for converting a two-player game down to a single-player one.

## 22.1 Dark oCCult

One of the problems with solo RPG is that one of the main game mechanics of RPG is the active narrative. The players around the table each speak aloud what their character is doing, and what each player speaks aloud determines how the game goes.

You could narrate your solo game, but your narration doesn't affect anyone else (because there is no one else around), and your narration is not structured by anything external to your own head. Even if you write down two paths in which your story could go, and then roll dice to determine which path you have to take, you've still only made a choice between possible narratives from your own mind. There's been no external influence on your "game". In short, there's no game happening here; it's just a really inefficient way to write a short story.

So how do you turn this process into a game?

The first solo-tabletop game I ever played was my own personal Creative Commons revival of an out-of-print 1983 card game called Dark Cults. The game is long forgotten, but there's a site somewhere out there on the internet that posted scans of the original deck and rulebook, so I re-created the deck with free art assets and transcribed the rules, including the single-player mod.

The game is a story-telling game, and in a way, you're actually playing as the GM (game master) to a non-existent role player. I guess if you really wanted to, you could play as the character, narrating the game in the first-person, but I just play as the GM, telling the story of the character.

In a nutshell, the game play goes something like this: you imagine a main character or draw from the pre-built character deck, then you start drawing from the main story deck, structuring a storyline around the cards as they happen. Eventually, the cards demand that you draw a Threat card, which may lead to a neutral or an evil encounter with some Character card. An encounter with evil always leads to an End card (if you're brave) or to a Save card (if you're conservative): these don't end the game, they just end that "chapter", meaning that your character either escapes or dies. Assuming your character has escaped, you start a second storyline, continuing your character's journey through the dangerous world.
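If that flow is easier to grasp as pseudo-play, here's a toy sketch of one "chapter" in Python. The deck names follow the description above, but the deck proportions and the draw loop are my own rough approximation for illustration; only the 50-50 End card odds reflect the actual game.

```python
import random

# Toy approximation of one Dark oCCult "chapter". The deck mix below is
# invented for illustration; only the 50-50 End card odds come from the
# actual game.
story_deck = ["Atmosphere"] * 6 + ["Object"] * 3 + ["Threat"] * 3
random.shuffle(story_deck)

brave = True  # gamble on an End card rather than falling back on Save

for card in story_deck:
    print(f"Drew {card}: narrate a sentence or two about it.")
    if card == "Threat":
        if not brave:
            print("Save card: the character survives the night. Chapter over.")
        elif random.random() < 0.5:
            print("End card: Escape! Start the next storyline.")
        else:
            print("End card: the character dies. The story is over.")
        break
```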

The game is nicely self-contained. It's a big stack of cards (which get split into five or six decks, so it does require table space), and that's it. No dice, no tokens, no stats. It's just the cards and your imagination.

It's the perfect single-player game. It's atmospheric, it's easy to play but requires enough imagination to impose a story onto the cards you draw, while providing just enough guidance and randomness to prevent you from just sitting around inventing a story in your head. As long as you enforce some basic minimum-sentence requirements, it never becomes mindless card-drawing, either; it does force you to put thought into justifying how the adventure is taking place.

The cards are the obvious thing that makes the process a game; they provide prompts, restrictions to what your character can do, and unpredictable results to even your best efforts to spare the life of your sometimes over-adventurous character. But there's some free will, here; you can always gamble on how far you want to tempt fate. There's always the Save card that you can fall back onto, but you'll probably avoid that, because it's a 100% guarantee that your character survives the night, and who wants a 100% guarantee? More likely, you'll tempt fate and play cards right up until an evil encounter, at which point there's a 50-50 chance of survival. It's just thrilling.

### 22.1.1 Problems

In terms of pure RPG, it's the binary randomness of Dark oCCult that could be seen as its greatest weakness. You don't actually build your character, and there are only three ways the story can end: your character is Saved and goes home for the night, or your character either Escapes or Dies, depending on a single card draw.

Is that any different than dice-controlled combat? not really, it's arguably just a more efficient way to the same two possible outcomes.

Does it feel less RPG than dice combat? yes, it feels very binary. Sometimes you feel especially cheated when a character is armed with a gun and yet still gets overcome by an escaped convict.

Is that a problem? Well, not really. Life happens in funny ways, and the way I see the 50-50 split of the End card deck is that it reflects how real life actually plays out. I think we like to think that life gets determined based on stats and dice rolls and bonuses, but I've found that the 50-50 card deck feels a lot more realistic, in the end.

### 22.1.2 Mods

At the time of this writing, I am slowly working toward an optional mod of Dark oCCult that will incorporate more RPG elements; I've already incorporated the pre-built characters (adapted from the OpenD6 adventure rulebook), so there are characters with defined skills and backstories to start things off. It's not too hard to assign those characters stats, to attribute stats to evil Characters, and then allow the player to determine the outcomes of encounters with dice.

A more obvious mod is to create new cards. Sure, **Dark oCCult** has 12 locations (11 from **Dark Cults** and one that I added), so why not make 4 or 8 or 12 more for yourself, and swap out location sets (you can't just throw in new cards, or else you'll throw off the ratio, so swap cards out instead). Create an alternate set of Atmosphere cards, or Object cards.

Better still, as proposed by Kenneth Rahman (the creator of **Dark Cults** ) himself in the original rulebook, why not create an alternate deck entirely? His original deck was based on a kind of Horror Movie setting; an AnyTown setting with the usual assortment of urban bad guys. What if you made a deck filled to the brim with science fiction story elements on an alien planet? or a fantasy deck? or a crazy mad scientist deck with a crop of 50's B-movie tropes? or how about a nonsensical cartoon world? The possibilities are endless.

### 22.1.3 Verdict

Dark oCCult is a perfect single-player game; it inspires you to be imaginative, it strikes the perfect game-to-story ratio, and it scales to 2+ players when you have friends around.

## 22.2 Combat Heroes

Not counting vicarious observations of my friends back in elementary school, the first RPG material I personally owned was a pair of paperback books, **Combat Heroes: Black Baron** and **Combat Heroes: White Warlord**. I couldn't figure out how to play them and had no patience for reading the rules, so I mostly just looked at them and let my imagination run wild; I lost track of the books but never forgot them.

It turns out that there are four books in the **Combat Heroes** collection, in two pairs. They can each be played as a simplified solo game, or they can be played with two players, with one book being played against the other.

The books are light on narrative, consisting mostly of full-page illustrations of corridors; the story is that you're in a maze, and must navigate through it to find 9 treasures (or kill your opponent, in two-player mode). It all looks very much like a storyboard for a video game, and amazingly it plays like one, too.

It's a brilliant design with surprising flexibility in mechanics; between the solo and two player games, you can move, hide, attack, and interact with objects to find treasure. The complexity somewhat boggles the mind, and it's actually completely playable with a good PDF viewer like xpdf, which has single page turns and page turns in increments of 10, so you can move back and forth between pages quickly as you navigate the maze, look up riddles and results, and so on.

### 22.2.1 Problems

The solo game is, admittedly, more limited than the two-player mode. You move through the maze, solve riddles to collect treasure, sometimes endure traps, and that's pretty much it. It's not as complex as the two-player game, and I don't see any comfortable way to play it as a two-player game aside from flipping desktops from one PDF to the other. Granted, the books are complex enough that this tactic might actually work, but it would obviously change the dynamic and become far more complex, since you'd be managing the stats and inventory of both characters, plus their location on the map, and so on. Not impossible, but it might become a lot more about logistics than fantasy, which you may or may not want from a game.

There's not much replay-ability as a solo game; once you experience the maze and riddles, you might go through a second time just to see what happens if you play it a little differently, but there's no AI and all choices are scripted. On the other hand, there are four books, each playable for $0 at this point, so they're worth trying at least once each!

### 22.2.2 Verdict

**Combat Heroes** is an interesting experiment in analogue visual game design. As a solo game, it lacks AI, so you probably won't play it again and again, but it's a lot of fun once or twice.

## 22.3 Lone Wolf Saga

While I was scouring the Internet in an effort to re-discover **Combat Heroes**, I stumbled upon Project Aon, a concentrated and thorough effort to preserve and revive the works of game designer Joe Dever (and collaborators Rob Adams, Paul Bonner, Gary Chalk, Melvyn Grant, Richard Hook, Peter Andrew Jones, Cyril Julien, Peter Lyon, Peter Parr, Graham Round, and Brian Williams). It turns out that Dever not only created the **Combat Heroes** game, but also a whole series of RPG books called the **Lone Wolf** saga, and they're all available for download! (As EPUBs, no less!)

The Lone Wolf books (and a few books set in a different universe, but using the same mechanics) are, essentially, an RPG mod for Choose-Your-Own-Adventure (or maybe the other way around; not sure which came first). That's an over-simplification, though, because each book is complex enough that you can progress through it at least twice and get unique progression, if not a wholly unique storyline.

The mechanics will feel familiar to any RPG player. You are the last of the Kai warriors, known as the Lone Wolf; at the beginning of the book, you get stats (by using the random number table provided in the back of the book, but I just use a d6 or Python `random.randrange(0,10)`), you pick your weapons, and then start your journey.
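If you'd rather roll digitally, a couple of helpers will do. The base-plus-draw formulas below are only an illustration of the general pattern; check your copy of the book for the real numbers.

```python
import random

def random_number_table():
    """Stand-in for the 0-9 random number table in the back of the book."""
    return random.randrange(0, 10)

def d6():
    """Or roll a six-sided die, if that's more your style."""
    return random.randint(1, 6)

# Illustrative only: a fixed base plus a table draw is the general
# shape of stat generation; the actual base values come from the book.
combat_skill = 10 + random_number_table()
endurance = 20 + random_number_table()
print(f"Combat Skill {combat_skill}, Endurance {endurance}")
```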

The journey progresses much like a MUD or text adventure game; you navigate through paragraphs by following prompts. This obviously limits the story a lot; if you want to go North at an east-west crossroads, you can't, because there's no text for that. On the other hand, you can go east or west, and you can only do one of those things each play-through, so some replay-ability is inherent. If you don't believe me, then refer to Project Aon, which has provided proof in the form of charts to show just how divergent each book can become. Yes, they all share a common entry point and all successful paths eventually lead to a common ending, but since getting there is half the fun, you can rest assured that each Lone Wolf book has at least three or five main branches of play. There'll be repetition, obviously, but that's true of any game upon re-play; it's the variety that counts, and with over 20 books (with at least three story branches each) in the series, you have _plenty_ to work with.

Combat encounters use your stats; your enemy's stats are provided, and you battle in the usual RPG way, calculating hits and damage and so on. As with any analogue solo game, there's nothing to enforce anything you do, but I assume that you want to game and not treat the story as a less interactive Choose-Your-Own-Adventure book.
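To make that bookkeeping concrete, here's a deliberately generic combat round in Python. The real books resolve combat with their own results table, so the roll, the comparison, and the damage values below are placeholders for the pattern, not the published rules.

```python
import random

def combat_round(my_skill, my_endurance, enemy_skill, enemy_endurance):
    """One generic round: skill plus a 0-9 roll on each side, with the
    loser's endurance whittled down. Every number here is invented."""
    my_total = my_skill + random.randrange(0, 10)
    enemy_total = enemy_skill + random.randrange(0, 10)
    if my_total >= enemy_total:
        enemy_endurance -= 2  # placeholder damage
    else:
        my_endurance -= 2     # placeholder damage
    return my_endurance, enemy_endurance

me, enemy = 15, 12  # invented endurance scores
while me > 0 and enemy > 0:
    me, enemy = combat_round(12, me, 10, enemy)  # invented skill scores
print("You live!" if me > 0 else "The enemy prevails.")
```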

### 22.3.1 Problems

The only problem with the Lone Wolf saga is the same as any game with static content, analogue or otherwise; there is only a finite number of stories being told, and you can only explore that which the content creator thought to write down for you. It's no more a deal-breaker than a linear video game.

This highlights, for me, one of the strengths of the **Dark oCCult** game; the narrative comes from your head, so you _can_ explore anything in any direction. But even with **Dark oCCult** there is a finite set of cards, which eventually will get "old" no matter how you dress them up with new story ideas.

### 22.3.2 Mods

I can imagine, in a world where I didn't have a day job, writing expansions for some of these books. What if, at that east-west crossroads, a player _did_ go north? What would the player find? what if there were objects in each and every location that the player could pick up, examine, or use?

Alternately, there's the ultimate mod: write your own adventure, and form a community of other solo players who write these adventures, and trade amongst yourselves.

### 22.3.3 Verdict

The **Lone Wolf** and **Freeway Warrior** books are priceless. They're fun, inspiring, imaginative, and true to the RPG spirit.

## 22.4 Tunnels & Trolls and Dungeon Delvers

**Dungeons & Dragons**, having been the first on the scene, is, at least in my mind, the standard by which any other game system is measured. **D&D** rules are familiar, multi-faceted, the basis for several video games, the progenitor of the Open Game License, and very, very complete. Even if you've never played **D&D**, you're probably at least a little familiar with **D&D** rules whether you realise it or not. And if you're playing **D&D** and have a question about how the gameworld works, you can rest assured that there's an authoritative answer out there.

Then again, you don't get that kind of coverage without accepting that your Core rulebook is going to be hundreds of pages, plus all the extra modules you can dream up. **D&D** is complex.

Sometimes I like complexity when playing solo, because it provides me something to really sink my teeth into (see, for example, **Dungeoneer**). There's a certain feeling you get after you've spent a week or two reading over the Core rulebook, and then designing your character, and then you sit down to play through a campaign. I think the technical term for the feeling is called "obsession", and if that's what you want, and you can afford to invest that much time and thought into a game, then a properly complex system is perfect. However, if your aim is to just sit down and kill two hours without any real investment, then you might benefit from a simplified rule system.

Shortly after **Dungeons & Dragons** appeared on the scene, there came a game called **Tunnels & Trolls** which was, apparently, a deliberately simplified alternative RPG rule set. To this day, that does seem to be the main appeal of **T&T**: simpler rules and possibly a dash more whimsy. For whatever reason, it also _seems_ (to me, at least) that solo games are firmly and historically embedded into **T&T** culture, more so than in **D&D**.

If you're familiar at all with **D&D** rules, then **Tunnels & Trolls** is a breeze to pick up. If you're new to RPG rule systems (or you've only ever encountered them through an RPG video game), then **Tunnels & Trolls** is a really nice and gentle introduction to all the typical concepts. **T&T** provides just enough stats and inventory management to keep you engaged, but not so much that it turns the game into an exercise in accounting.

To a hardened **D&D** player, these rules may seem at first over-simplified (what do you mean there's no attack-of-opportunity?), but especially as a solo game, I think the simplified ruleset makes the game easier to sink into.

There are two ways to get **Tunnels & Trolls**. To get a feel for it, you can grab the abridged free edition that was given away on the first Free RPG Day, which is enough to get familiar with the basic rules, and includes a solo game for you to play.

Or you can purchase the latest rulebook as a PDF or a hardcopy from any book store with a good gaming section.

I started with the free rules, and expanded to the paperback almost immediately, because it's just that good. I say it's "just that good" because it really does have, for me, the right ratio of accounting and management to imaginative fun. I don't have an issue with more complex rules, like **Shadowrun** or **D&D**, but there's a place for simplification, too, and certainly there's always a place for _well and clearly written_ rules.

The ultimate appeal, though, is the proliferation of solo adventures written specifically for **T&T**. There's not really a go-to location for **T&T** content, so you'll have to look around the Net for them, and you'll probably have to settle for whatever format you happen to find. I prefer EPUB because it's a lightweight text-based format that I can read on anything (e-ink reader, computer, mobile) but these solo adventures come in every shape and size; some are PDFs generated from an office application, others are PDFs scanned from old publications, others are interactive websites, and so on. So you'll have to wade through the different offerings and find what works for you, or else be adaptable.

A few places to start your search:

  * drivethruRPG.com features a few $0 Trollzines that happen to bundle solo adventures in them, plus a whole (virtual) bookshelf of T&T paraphernalia to purchase.

  * freedungeons.com is a site with a lot of **T&T** content and some links to other sites offering T&T solo games. Much of this stuff pre-dates EPUB, and rather assumes that you are online looking for a good time.

  * Not surprisingly, the publisher of **T&T** itself, Flying Buffalo, has solo games for sale.

There are others, I'm sure, and it wouldn't be impossible to write your own and trade with friends, but those are the sources I know about.

### 22.4.1 Dungeon Delvers

The RPG culture has, at least as far as I could ever tell, long been based on sharing and modding. As with a lot of other casual bonds between community and corporation, though, questions eventually arose about just how much of a game the publishing company owned and how much the community that supported it could claim as its own.

The Open Game License helped define some of that, and several games also get released under Creative Commons licenses.

Both are great licenses. Depending on their exact terms, they, at the very least, ensure that a game can be self-contained unto itself.

With the most liberal Creative Commons license available, you can acquire a game (for $0, in some cases) and forever own that game _plus_ all the intellectual "dependencies" cascading from it. In other words, if I develop a solo game adventure for a Creative Commons gaming system and post it online, then anyone who downloads it not only owns the solo adventure, but also the game system it was written for, along with the right to share both with friends, and to modify and re-distribute them.

It's a somewhat esoteric distinction, because if you buy any rulebook and memorise the rules, you can teach other people how to play it, you can develop house rules, and you can invent your own adventures for it; it looks and feels identical to Creative Commons. Then again, digital downloads of music look and feel the same as recording songs off the radio or making a mix CD, but it's been drummed into people's heads that they're evil and sinful. The same could easily happen to tabletop gaming, if ever enough money got involved. It's nice to have licensing that explicitly identifies what players and content creators are expected to do with what they buy, and indeed whether or not they may in turn sell the work they do based on someone else's game system.

There are several games out there that release under Creative Commons, but one of my favourites is Dungeon Delvers by Brent Newhall. It's pretty close to **Tunnels & Trolls**. In fact, of the 8 character attributes in **Tunnels & Trolls** 7th Edition, six are also in **Dungeon Delvers**, so it's trivial to "port" a solo game written for **T&T** to **Dungeon Delvers**, if you either prefer a Creative Commons base or simply cannot afford a current **T&T** rulebook.

### 22.4.2 Problems

Solo adventures written for **Tunnels & Trolls** (or **Dungeon Delvers** or any other lightweight rule system) have the same "problem" as **Lone Wolf** or any scripted adventure: there's limited replay-ability, especially in the mini adventures. You play it twice, maybe thrice, and then you shelve it.

This isn't a deal breaker, it's just something to keep in mind.

### 22.4.3 Mods

You could write your own adventures.

### 22.4.4 Verdict

It's an easy RPG system, which makes it nice for new RPG players, and certainly it's nice for those times that you don't want to spend all your free time dealing with rules and exceptions and lots of stat management, especially since in a solo game you're just playing against a script and a die roll. **T&T** is a fun and whimsical system, good for any RPG player.

## 22.5 Non-Solo Games

Those are the RPGs I know about that are intended to be played, at least optionally, by a single player. I'm sure there are many more out there; it's just a matter of finding them!

Another option available to the solitary player is to take a game intended for two (or more) players and modify it so that it plays as a single-player game. I've tried this, to some degree, and did not enjoy it, but I know that some people do, so I'll only discuss it broadly.

### 22.5.1 Mechanics and Game Design

It seems that some people can separate themselves from the thrill of being "in" the game, and just enjoy the game itself, from either a story-telling or a purely mechanical perspective. If you're one of those people, then you can probably adapt nearly any game to single-player; just play both sides and revel in the way the game progresses.

The great thing about an RPG, like the **D&D** board game, is that the game and even the story is only _part_ of the experience. The other half of the fun is the character progression and the score keeping, so if you imagine a really good party of adventurers, with different motivations and intriguing backstories, you can play the game, enjoy every last bit of the story, even over one or two re-plays, and just savour the development of your characters. You'll have to play GM, all party members, and the enemies, but that's not all that different than what a novelist does during writing. I think there's a lot of potential for fun and creativity, as long as you approach the game as a story rather than approach it as just a game.

Since the RPG community tends to be so darned creative, there are plenty of pre-written campaigns and outlines for **D&D**, **Pathfinder**, and any other system you can find.

I do think that a card- or board-based structure is important for solo play, though; if it's a pure RPG (a rulebook and a campaign, and nothing more), then there's no structure imposed upon you, so there's no rhythm or, dare I say, ritual. You're just sitting alone in a room imagining fantasy battles, but your mind starts to wander, and your story structure just starts to fall apart. If you have cards or a board with tokens, then structure is imposed on your imagination, and things stay a little more balanced.

### 22.5.2 Cooperatives

Related to the idea of playing all party members, any game that concentrates on cooperative play is great for solo gaming. In such a game, the players work together against a built-in AI enemy, so condensing two or more players into one is a pretty natural option. I've seen the co-op game **Mansions of Madness** played in single-player mode and it works quite well.

### 22.5.3 Solitaire Mods

The first time I heard of "mods" for board games was in idle conversation with an experienced tabletop gamer; I asked if a specific game required more than two players, and he just shrugged and said "Yes, but just look up two player mods online", and blew my mind. The idea that games could just be "modded" by any random person online (much less _myself_) had never even occurred to me, and now I think it's one of the most persuasive reasons to play tabletop games; if something doesn't work for you, change it!

Turns out, if you want to play a game as a solitaire game, chances are someone else out there wanted to do the same, and probably posted their new rule ideas online. For instance, **Gloom**, which is not an RPG but does take inspiration from **Dark Cults**, can be played as a solitaire game, as detailed on boardgamegeek.com by a very clever user (and then boosted by other clever users with ideas of their own). With just one internet search ("gloom solitaire", for the record), a simple 2-4 player game is transformed into an open-ended story-telling game for one player (note that the story-telling element doesn't come through on its own, so you have to make that deliberate, just as you do with **Dark oCCult**).

An "RPG card game" called **Dungeoneer** (think **Dungeons & Dragons** combined with **Magic: The Gathering** ) has a very effective solo mod that'll keep you busy for at least 90 minutes per game. It doesn't enforce a story the way **Dark oCCult** does, but if you're pining for some classic stat accounting and dice combat, **Dungeoneer** will blow you away.

Those aren't the only games with solitaire rules from communities of players. As you get used to the techniques of modding game mechanics, you might start developing the knack for it yourself (for instance, by forcing positive points on characters the longer they stay in play, the Gloom mod gracefully develops an antagonistic AI that you must race against as you attempt to kill off each character; it's simple, yet brilliantly effective). So if you do get a knack for modding rules, give it a go, and post your results online so others can try your mods!
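Just to make that race dynamic visible, here's a loose Python caricature of the idea; the family names, point values, and win condition are all invented, so treat it as a thought experiment rather than the actual mod.

```python
import random

# A loose caricature of the "time heals" antagonist described above:
# every turn each character automatically gains points, so you are
# racing to drive the whole family negative. All numbers are invented.
family = {"Ada": 0, "Bram": 0, "Cora": 0}

for turn in range(1, 11):
    victim = random.choice(list(family))        # you play a misery card
    family[victim] -= random.choice([5, 10, 15])
    for name in family:                         # the built-in "AI"
        family[name] += 5
    print(f"Turn {turn}: {family}")

print("You win!" if all(points < 0 for points in family.values())
      else "The family cheered up; the AI wins.")
```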

[EOF]

Made on Free Software.

# 23 The I-Told-You-So Unix System Layout

Lately some people have gotten up in arms about the complexity of the layout of the Unix filesystem. They say it's over-complicated, it smacks of archaism, and surely it ought to be simplified.

For a while, I was cautiously receptive to this idea. I'm still partially receptive; I'm not sanctimonious about the filesystem layout, but I do feel that the gut reaction of _oh, that's too complicated_ is often borne of not having a personal use-case for all of its features.

Two things reminded me of why the layout of the unix system was such a smart design: I got an SSD drive, and I started designing multimedia workflows for people and places larger than, say, two or three friends making movies in their backyard.

> Just as an aside: if you're not using an SSD drive, you should get one. Now. It's 2016 (at the time of this writing), SSD drives are beyond blazingly fast, and you should have one.

The thing about the system layout of a Unix or unixy system is that it's modular. We UNIX folk beat the drum of modularity a lot, almost to the point that it loses meaning. We say "keep it modular" or "do one thing and do it well", which prompts the question "if you like modularity so much, why does the `ls` command have 57 flags available to it?", and the truth is that yes, sometimes modularity is difficult to nail down.

In this case, though, the lines are pretty clear. And that's the nice thing about the system layout: it softly delineates parts of your computer, one from the other, so that you have the finest-grained control over where your data is stored.

Why does this matter?

Admittedly, on a single-user system, it doesn't. Or rather, it didn't.

Let's back up: a long time ago, it mattered very much, because computers were multi-user, hard drive space was expensive, user permissions were significant, and system design was critical.

Then computers ended up on people's desks and laps, and hard drives got cheap, and all that data got normalised; throw it in a bucket, call it My Computer, and let the user sort it out.

But now we're all using SSD drives, because they're super fast, but also expensive. Suddenly it behooves us to re-think the harddrives-are-cheap mantra, because yes, they are, but the fast ones are decidedly _not_.

And throughout that history, the reality that BIG systems actually _do_ exist has not gone away. To a lot of us home computerists, that's not a part of our reality, but sometimes you start out as a home PC geek and end up a Linux consultant for multi-million dollar movies using multi-million dollar render farms, and you had better believe that a little bit of flexibility in such an environment goes a l-o-n-g way.

## 23.1 The Big UNIX I-Told-You-So

Here's the thing: this is exactly why Unix "greybeards" and Unix purists are so dang conservative. For a good 10 years, people had written off the need to ever think about storage ever again. Hard drives were bigger and cheaper than ever! Unix should get with the times. Who needs separate directories for system-dependent executables and non-essential executables? why bother having an `/opt` or a `/usr/local`? None of it means anything any more!

Well, first of all, that wasn't true even during the decade of cheap drives, because big mainframe-like systems persisted. Secondly, it all comes back around; with SSD, we're back where we started, and had we thrown out the structure designed 40 years ago then we'd be a lot worse off.

I'm not saying change is bad, but I am saying that arbitrary change should be avoided, and just because we don't see a use for something in our small bubble, it doesn't mean there is NO use for it anywhere. That's been my experience with a _lot_ in Linux and open source; one year, I'm shaking my head at some "pointless" project because "who even uses that any more?" and then the next year I'm working at a place where the very existence of my job relies on that project. That's a slight exaggeration, but there have been examples where that's very nearly true. The point is, the world is a big and diverse place, and just because the one or five or 100 computers you interact with on a daily basis are configured one way, and the one or five or 100 users you interact with see the world in one specific way, I guarantee you there's another institution, company, or country that sees things completely differently.

And that's why, sometimes, when someone staunchly puts their foot down and says "I don't want to do it that way, because I've always done it _this_ way", they aren't just clinging to tradition or succumbing to their own force of habit. Sometimes, there are reasons behind the conservatism, and it's the _right_ thing to do.

## 23.2 Proof

Wait, so what's this got to do with system layout, anyway?

Right. My point, I forgot to make my point.

The thing about the layout of a Unix system is that it is modular. It splits up all kinds of information and throws it all over the place on a system. Is that confusing? not if you understand it. Is it annoying? sure, sometimes, but if you anticipate it, then you can actually override it if you want.

And yet I speak of it as a good thing.

Well, take an SSD drive. It's fast, it's small, and it's expensive. You don't want to waste space on your ultra-fast drive with silly things like family vacation photos; you want to put the _important_ stuff on there; the stuff that your computer needs to boot up, to launch applications, to run code. You want that to be fast.

So you split up your installs, such that executable code and libraries get installed to the super fast directories, while all the extraneous assets and documentation files get installed to slow directories. In this way, you get the performance of SSD where it matters (boot time, launching common desktop applications and utilities), but you conserve the SSD's available space and lifespan.
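One blunt way to pull this off, short of custom-compiling everything with careful prefix flags, is to relocate a bulky directory to the slower drive and leave a symlink behind on the fast one. Here's a minimal Python sketch of that idea; the paths are made up, so adapt (and back up) before trusting it.

```python
import os
import shutil

# Minimal sketch: move bulky, rarely-read documentation off the SSD and
# leave a symlink behind. The paths are made up; run as root, and only
# after backing up.
fast = "/usr/share/doc"             # bulky docs currently on the SSD
slow = "/spinning/usr-share-doc"    # destination on the big, slow drive

if not os.path.islink(fast):
    os.makedirs(os.path.dirname(slow), exist_ok=True)
    shutil.move(fast, slow)         # relocate the tree to the slow drive
    os.symlink(slow, fast)          # leave a pointer on the SSD
```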

I do exactly this on my main workstation; feel free to read up on the technical bits about that at your leisure.

The same principle applies to big installs. If you're managing a massively multi-user system, one of the things you inevitably need is the ability to easily and predictably determine the install destinations of various kinds of data. Do your docs need to live on the same high-priority drives as your production applications? should your user data exist on the same drives? can you even afford to keep your user data in the same data center, or does your backup scheme require a data center run by robots? (I'm actually not kidding about that.)

## 23.3 Too Many Files?

The perceived problem with the way Linux manages its applications is that the user doesn't, then, know where all the little bits and pieces of one application end up; you have binaries in one place, vital libraries in another, documentation someplace else, and icons and other assets in still another. How can anyone ever hope to re-assemble an installed application in order to take it to another computer, or uninstall it?

Well, how do you keep track of your weekly grocery requirements? Do you write yourself notes about what ingredients you've run out of, and then hide the notes, each in different places around the house? If so, you'll be pleased to know that there's a better way: make a centralised list. Keep the list in one place, at all times, and jot down what you need on that list. When you go shopping, take that list and use it as your master reference.

This, of course, translates to installing applications on Linux, except that you don't have to do it manually. Computers exist, and so they can (and should) do it for you.

On Slackware, a master list of all packages exists in `/var/log/packages`; each file there represents one package, and it contains a full list of every file installed in that package.

On a Red Hat system, all installed packages are listed with `rpm -qa`, and the command `rpm -qpil foo.rpm` provides a full list of every file contained in the package.

Under normal circumstances, you'd just use RPM or DNF or pkgtools or whatever to remove a package, or to install a package. But if you are trying to "extract" an application from one computer to take it to another, as long as you have a list of the files, the actual work is a fairly simple `find` command away.
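If `find` feels too terse, the same job is a few lines of Python. This sketch assumes a simplified one-path-per-line file list (a real Slackware package log also carries header lines you'd need to skip), and the package name is made up.

```python
import os
import shutil

# Stage copies of every file a package installed, ready to be tarred
# up and carried to another machine. Assumes one path per line; a real
# /var/log/packages entry also has header lines to skip.
pkg_list = "/var/log/packages/foo-1.0-x86_64-1"  # hypothetical package
staging = "/tmp/foo-extracted"

with open(pkg_list) as listing:
    for line in listing:
        path = os.path.join("/", line.strip())
        if os.path.isfile(path):
            dest = staging + path
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.copy2(path, dest)
```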

But wouldn't it be so much easier if we just kept all the files together?

Sure! But there's a downside. I found a random Mac sitting around the office and ran `du -h` on its `/Applications` directory. The result? About 10GB.

That's 10GB for the userland apps.

On my decked-out, install-every-app-I-can-find Slackware machine, the sum total of my `/usr/bin` directory is not even 1GB. But my binaries aren't self-sufficient, so let's add in `/usr/lib64`. Much heftier; about 6GB.

Both of those, I have installed on my 16GB SSD drive. That means that my SSD drive is not yet half full, and I've installed everything I will ever need, and then some. The random Mac I found around the office would have eaten most of my SSD drive, and that's just from having a few office-type applications installed. I guarantee if you made me use it as my own machine, the application directory would double from the kind of apps I install (well, no it wouldn't, because I'd install Linux, but I digress).

The point I'm making here isn't that Linux applications are smaller, because they aren't. The point is that they're better distributed. I didn't even bother with `du -h` against the directories containing all the assets (icons, widget images, splash screen graphics, and so on) or the documentation, because on Linux I don't have to keep them on my SSD drive (and so I don't).

Can you hack around it on a Mac? of course you can. Does Windows manage apps modularly? No idea. I'm just saying that modularity is powerful, and the fact that Linux embraces it is, as long as you know how to manage it, a **Good Thing** because you can leverage it when you need to.

## 23.4 Keep it Modular

Yes, a modular system layout is a big deal. It's an important part of well-designed computing, and understanding it is important if you're going to speak up and call for revisions. And I say that unless you've sat down and custom-compiled your applications, struggling with `cmake` flags and a fair number of `sed` hacks, to get your doc stack and icons and fonts to go to a lesser drive, then you don't understand why the system layout of Unix is complex, and so you're probably in no place to call for its overhaul.

That's why the unix system is laid out the way it's laid out. Now that you know, you can perhaps appreciate it a little more, and whether you "get it" or not, you can certainly see how many people find it very useful and very important.

[EOF]

Made on Free Software.

# 24 Linux is not an App

Linux is a kernel, and I don't say that lightly, because it matters. But in the vulgate it's an "operating system", too, so that's the term I'll use here.

For a very long time, Linux has been either discounted or ignored by vendors of other operating systems; you can attribute that to confidence or fear, as you please. Lately, there have been some shifts in how competitors view "Linux" (both the OS and the esoteric "force" that it represents), and one tactic that I perceive is the peculiar and subtle suggestion that Linux is, essentially, _just an app_.

Usually I would interject the aside "not literally", but in this case I do feel that it's the **literal** goal of some companies to create the impression upon people that Linux really is Just An App. It's as if to say, "It may seem strange that we're raving about Linux in our marketing materials now in spite of all those mean things we said about it in the early 2000s, but the reason is this: Linux isn't a reliable OS and should not be used instead of _our_ product. Linux is just an app, and it's OK to use it here and there, as long as you're running it from within our product."

## 24.1 Linux as a Software Appliance

Linux was already pretty "unavoidable" in the server space; realistically, you weren't going to work around large server deployments and not encounter UNIX (often in the form of Linux, specifically). Microsoft and Apple both knew that, and they each carved a niche for themselves in home and small business server spaces.

When Microsoft decided to change its tactics from all-out war against Linux to sudden-colleague, they positioned it as a kind of auto-correction of all their past statements. They didn't go so far as to acknowledge the monopolistic business tactics and outright corporate sabotage, but they very suddenly took a new stance: they love (not "like", but _love_ ) open source. Microsoft decided, overnight, that they could magically transform from competitor to colleague.

How did this change in heart and stature manifest itself? by making Microsoft more compatible with Linux than ever before. You can now run Linux on your Microsoft "cloud", and you can run "Linux" applications via a compatibility stack on Windows. Can you do the same with "Windows" applications? well, yes, technically that's been possible for years upon years, but only because independent hackers have sufficiently reverse-engineered how they work. Is Microsoft now going to help with that effort? no, of course not.

To make sure we have it straight, this is the state of play:

  * Microsoft has enabled you to run open source Linux-centric software on Windows
  * Microsoft has enabled you to run open source Linux instances on Azure
  * Microsoft has not provided any support to help you run Windows-centric software on Linux
  * Microsoft has provided support to help you emulate the Windows OS on Linux

If this seems like a one-way street, that's because it is. Microsoft is supporting any effort that results in the sale of a licensed copy of Windows or Azure, whether it's running on bare metal or in a virtual environment; they don't care. If you're only in it for one or two applications, they aren't interested in giving you anything. Microsoft gains most of what Linux has (on a superficial, but functional, level) without having to give up anything that they didn't have before. It's a no-brainer, and a little surprising that they didn't think of it earlier.

And in fact, what I'm really driving at is that Linux loses a little something. Instead of being identified as an operating system, Linux becomes, in this model, an application that you download and install and run on top of Windows. Linux isn't an alternative operating system; to a born and bred Windows user, this new exciting Linux thing is just a quirky application, much like Visual Studio but with a built in web server.

But who cares, right? Whether we call Linux an "operating system" or whether we just treat it as an appliance you run as a `.exe`, the result is the same. In fact, in many ways, the functional result is _better_ than what you'd have if you're stuck on Windows: you can still use Linux even if you're not allowed by your sys admin to install Linux on your hardware. You get all the benefits of Linux without any of the "burden". Microsoft, one of the world's largest monopolies, granting its blessing upon Linux should, in theory, help "spread" Linux in some very real and practical ways.

Let's come back to this quandary later. First, let's look at another way Linux has lately been relegated to application-level duties.

## 24.2 Embedded Linux

Linux has long been a key tool in a hacker's, or hobbyist's, toolkit; it's open from the ground up, allowing a user to learn everything involved in making a computer run. It's no exaggeration to say that just booting into Linux a few times can, if you think about what you are doing and how it is happening, improve your understanding of computers. In fact, I don't even think it's exaggerating to say that just _hearing_ about Linux can improve someone's understanding of computers; before I knew Linux existed, it literally hadn't even crossed my mind that a computer could be purchased without an OS, and that an OS from a third party could be used (I didn't understand, at the time, that Microsoft itself was, at least legally, a third party).

Because it is so suited as a hobbyist's OS, when the "maker" fad hit its full swing with the rise of ultra-cheap SOC boards, Linux was (and is) the _de facto_ OS in use. This came as no small surprise, clearly; the Raspberry Pi (although the Pi foundation has made it clear that they are no ideological friend to Linux) sells out consistently upon each new board's release, copycat boards proliferate, and Microsoft scrambled to mitigate the craze. They eventually did: they released a Windows 10 "core" installer that turns the Pi into a sort of .Net Arduino, a slave device to be programmed through a Windows master.

Before Microsoft got their code onto the Pi, though, there were a number of tutorials from some large vendors (not Microsoft, that I ever saw, but big places, like Autodesk in the guise of Make™) out there that instructed users on how to use the Pi exclusively as a slave device. These tutorials all but ignored that Linux is an operating system that you can just use, and that the Pi doesn't _need_ a host because it is its own host. The Pi and SOC boards like it are not broken-by-design cell phones and mobile tablets, where you need to connect them to a host computer to do anything useful with them, including program for them. It's a flat-out dangerous concept: a computer that cannot itself be programmed. That's fine for the industries creating new chips and boards, but for the end user, it's the very idea of closed source, blackbox, applianceware, and it flies in the face of what a "maker" (I'm using the nearly trademarked term somewhat sarcastically here) "movement" (also somewhat sarcastic, given the huge push from corporate monopolies driving this market) should be all about. If you can't sit down at your "maker" workbench and hack on the thing you're using, then just what kind of maker are you, exactly?

In a sense, it's kind of nice that Windows "core" got released for the Pi and the Minnowboard; at least in that model, you're acknowledging that the platform is closed and dumb, rather than blissfully ignoring the open source foundation that you are building upon.

## 24.3 Linux and Independence

There are several issues here, and they all revolve around one another, sometimes wandering off into pragmatism, other times flirting with ideology, but all of them firmly rooted in tech.

It's no secret that I'm a Unix and Linux enthusiast, to the point that I use nothing else. I'm open to something else, as long as it's open source and meets my requirements. But at the time of this writing, the only thing that consistently meets my computing requirements is Linux and, to a lesser degree (in terms of multimedia flexibility), BSD. As such, I have an interest in promoting unix and open source above all else, so on one hand I have a "bias", although on the other hand, my "bias" is sincere; I am interested in the large scale success of open source because it works best for me, it enabled me to get into technology even though I had no financial business engaging with such an industry, and it likewise enables others to be successful.

And that's actually my first problem with this situation: technology is meant to improve quality of life. If that's not what technology does, then it's no longer useful to us and, in fact, potentially harmful. As such, technology should be accessible to us all, regardless of anything like class or status. I'm not saying everyone needs to know how it all works, but the ability to learn ought to be there, with no barriers. Same as anything else, like the ability to grow food for one's family, or to build a shelter, purify water, breathe clean air. If we're saying that something improves quality of life, it must then be as "natural" as everything else that sustains life. Otherwise, it's not improving life, it's yet another achievement in life that you must work toward: a luxury.

Microsoft and Apple could be called luxury items. I don't think they are very "luxurious" in the sense that I think an extended weekend in the Alps is luxurious, but we might call them _bourgeois_. They're restricted. And because they're restricted, they discourage learning and understanding. You can't ever really understand or know Windows or OS X, because there are major components that are kept out of reach from you. And if you de-compile and reverse engineer, then there are legal consequences to remind you that too much knowledge is just not allowed.

I guess more than the literal closed-off source code, it's the very superficial _legal_ limits that speak volumes. There's no reason you can't understand computers; it's purely a man-made blockade between you and knowledge that others, employed by Microsoft or Apple, obviously do have access to. I mean, we're not debating why I can't "see" a molecule; I can't see a molecule because I don't own a microscope with sufficient power to show me that level of detail. Code is different; it inherently has no barrier, at least no barrier to seeing it. I'm not proposing that Microsoft or Apple distribute a free CPU along with the code they write, but I am saying that if we're seeking to further the collective knowledge of humanity, then we should share what can be shared freely. And if we don't, then can we really claim that we're in technology to further our quality of life, or are we actually just luxury items?

Don't get me wrong; if you really want to be nothing but a luxury item for the bourgeois, that's fine; it's a free world, you can be as selfish as you want. But I prefer honesty. I'd rather Microsoft and Apple advertise their true intentions, rather than claim in marketing material that they're here to empower and elevate and educate.

What this boils down to is that Linux is an alternative to frivolous technology. It's a technology that can be both (you can, after all, run closed source software on Linux) but in the context of empowering users, Linux and open source have the market cornered. But, as has been said, the medium is the message, and to present Linux as an add-on to Microsoft threatens to obliterate one of the main points of Linux: independence.

With Linux, all you need _is_ Linux. It's an independent, free-standing, complete solution. You don't need a license, you don't need permission to use it, you don't even need to purchase anything (if you're good at dumpster diving) aside from electricity. You don't need to opt into a special Microsoft club to use Linux.

## 24.4 Linux is Efficient

From a purely technological point of view, treating Linux as an application is grossly inefficient. There's a pretty simple, and very apropos, analogy to be made: if you need a micro-controller to trigger motors and read sensors, then the [open source] Arduino is a great choice, but if you need a micro-controller and a web server, then probably a Linux-based SOC is more sensible.
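To put a face on that, here's about the smallest "micro-controller plus web server" a Linux SOC can run, using nothing but the Python standard library. The sensor reading is faked with random numbers, since the GPIO half depends on your board, but the point stands: the board is its own host.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import random

# A Linux SOC serving its own sensor readings; no host computer
# required. The "sensor" is faked here; on real hardware you'd read
# GPIO instead.
class SensorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = f"temperature: {random.uniform(18, 25):.1f} C\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 8000), SensorHandler).serve_forever()
```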

Same holds true for Windows users. If you've just bought a Raspberry Pi, it makes no sense to interface with that Pi through another OS. Because the Pi has an OS on it already. If you meant to buy a micro-controller slave, then trade the Pi in for an Arduino, or (this is less than ideal in terms of staying open, of course, but if you don't care...) just install Windows "core" on the Pi and let it be a slave. Don't add an entire OS to a stack that already has an OS.

Furthermore, a Pi _should_ have an OS (so don't install Windows "core" after all). It's a complete computer that can do some of the jobs of a micro-controller, but it's not a micro-controller. It has an OS, it's designed to be driven by an OS, and by the sheer grace of God, no entity has stupidly intervened to break the Pi's OS such that it no longer functions as a self-standing solution. If you buy a Pi, you _have purchased a computer_.

Now, understand, if you've bought a cell phone, you have _also_ purchased a computer, but unfortunately the vendor has broken your computer such that it can only be used as a slave device; it's sort of a moderately intelligent thumbdrive or flatbed scanner without the flatbed. You have to install "dev kits" to program for a mobile. You're obligated to plug it into some other device in order to modify how it operates (to the extent that you're even permitted to do so). It's an abomination of technology.

Don't turn your SOC into that.

I'm not speaking as a hobbyist, here, but in the interest of consumer rights. If you accept that a device that is fully capable of doing 100 things can be sold with 90 of those things arbitrarily cordoned off from you, then you're helping create a false economy driven by nothing but the checkbooks of the corporations producing the products. You don't want to be a consumer where value is determined by the people selling the product.

Linux is true to itself both technologically and economically. Value is derived from user requirements. There's room for artificial economy and bias just as there's room for closed source on top of the open foundation that Linux lays down, but one should come before the other.

## 24.5 Linux is Open

You can run closed source applications on Linux, and now you can run Linux on closed source, but don't be deceived: Linux is not an application. It's not just an open source application for a closed source environment. Linux is an open environment from the ground up.

The difference seems subtle, especially if you're just a code (or office, or data entry, or whatever) monkey stuck on Windows at work. In that case, Linux on top of closed source is indeed a blessing, because you can work in Linux even within the walls of OS restrictions. Seems great, as long as you ignore the fact that you have had to acquire "permission" to do something as simple as use open source, but it's a day job and it pays the bills. I get it.

I get it, at least to the point of necessary evil. But in real life, we can't relegate Linux to a `.exe`. Linux is open, and if you hit the ground and find that you can't dig any deeper, then you're not running on open source, and that's not something we should settle for in an advanced, knowledge-driven society. The obvious problem is that we're not really an advanced knowledge-driven society, but if we agree for a moment that we're aiming for that, then I think we can agree that open source is the solution that enables education with no artificial barrier. Why do we want to place an artificial barrier around this?

Well, we don't. If you can use Linux as a free-standing open source system, that's the ideal, and it's the message we want to send to people curious about technology. It's independent, and it's completely open.

Keep it that way by using it that way.

[EOF]

Made on Free Software.

# 25 Alternative

The word "alternative" is one of those shifty terms, with a definition that changes depending on perspective. For instance, something that's "alternative" to one person is the norm for another. Generally, the term "alternative" is considered to be defined by the fact that it is not considered to be in the majority or the mainstream.

Then again, sometimes the term "alternative" gets attached to the second instance of something. If a web server, like Apache, exists, then any time a different web server gets mentioned, it gets the _alternative_ badge, because we all silently concede that whatever it is, it's an alternative to that big one that we all know about.

## 25.1 Problems of Persistence

These thoughts occurred to me the other night, while I was tracking down a bug in some simple animation software I wrote. In this software, a user clicks a frame in the timeline and that frame gets an overlay icon or badge to mark it as the current selection. If a user clicked the frame again, we assume that the user is toggling the selection off, so the badge gets removed. Pretty obvious, typical UI.

 Click on, click off.

The problem was that if a user tried to select the _same_ frame again to re-select it, the frame would refuse to be selected because it already believed itself to be the active selection. The problem was solved pretty easily by some rudimentary garbage collection (although the larger problem is that the application needs a more robust selection library, but I digress), but it dawned on me that this issue was similar to what we, as a community of computer users, experience when we speak about applications.
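For the curious, here's a stripped-down Python reconstruction of that kind of stale-state bug; the real application's code is more involved, but the shape of both the bug and the fix is roughly this.

```python
# Each frame toggles its own badge, so a frame whose badge was cleared
# elsewhere can still believe it's selected and refuse a fresh click.
class Frame:
    def __init__(self, number):
        self.number = number
        self.selected = False

    def click(self):
        self.selected = not self.selected  # click on, click off
        return self.selected

class Timeline:
    def __init__(self, length):
        self.frames = [Frame(n) for n in range(length)]

    def select(self, number):
        # The rudimentary "garbage collection": sweep stale flags off
        # every other frame before honouring the click, so no frame
        # wrongly believes it's still the active selection.
        for frame in self.frames:
            if frame.number != number:
                frame.selected = False
        return self.frames[number].click()
```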

Whether it's the first on the scene, or the one that is best marketed, or the one that gets adopted by a majority of influential companies, we computerists very often award a badge to one application early on, when it's fresh. There's an implication that that software earned that badge by merit. And as that software grows and develops, it gets to keep that badge.

The "badge" we give it is the right to be The One to which anything else is an alternative. We do it with open source projects and closed source projects alike. We assign this invisible and silent Seal of Authenticity without any RFC, without debate or survey. Sometimes the badge is, if only by default, accurate; if there really is no other application like it, then it's hard to argue against referring to a software that comes later as an "alternative".

The problem is, there doesn't seem to be a requisite renewal period for these badges that we unwittingly hand out on a first-come-first-served basis. We give our Seal of Authenticity to whatever makes the biggest (or only) splash at some point, and it becomes not just the standard in its class, but it becomes the specification for everything following. You can't make a word processor at this point without it being compared to Microsoft Word. It seems _verboten_ to propose that Word is an insufficient measure of efficient word processing power, but for better or for worse, it got the badge and there's been no garbage collection to clear out memory addresses in order to allow for a second badge, or a new badge altogether.

There have been exceptions to this, of course; sometimes big popular applications finally fall out of favour, but more often than not, the computing public has an unnervingly long-term memory for its definitions list. You can rattle off general application "types", and most people, Rorschach style, have a brand name associated with it.

  * Office: Microsoft.
  * Photo: Adobe.
  * Video: Apple.
  * Server: Linux.

Is it really so clear, so obvious? or are we just being trite?

## 25.2 Problems of Scope

In programming and many other industries, there's a concept of scope, which defines the space in which something is true. In one function of an application, I might assign one value to a variable, but I only need that value within one function, so I make the variable "local"; it's valid for this function, but another function knows nothing about it.
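In Python terms (any language with lexical scope would do), that looks like this:

```python
def award():
    badge = "the standard"  # local: only award() can see this name
    return badge

def rival():
    return badge  # this scope has no idea `badge` exists

print(award())  # works fine
try:
    rival()
except NameError as err:
    print("out of scope:", err)  # NameError: name 'badge' is not defined
```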

As it turns out, this is yet another great analogy for how we computer users define "alternative" software. It can be a little mind-blowing, but different people need different things from their computers, to the point that it may never even occur to someone that some software not only exists but is the very linchpin of an entire industry. Certainly, as an employee of the visual effects industry, my definition of "obvious" _de facto_ applications differs greatly from someone who manages, say, construction material durability requirements, or even from someone who teaches the basics of video production to children.

The general computing public rarely acknowledges this, I suspect because of marketing, mostly. It's not in the interest of software advertising, however disingenuous this may be, to acknowledge that there are competitors or _alternatives_. Every software trying to sell itself is obligated to pretend that it's the only REAL solution available; nothing else compares, but if you do find something else, then you must compare it to THIS software, because this one's the real one. It's the one that got the seal, the badge.

And, strangely, it seems that outside of your own computing scope, your standard application becomes niche. You can sit down with your friends at the café and tell them how great this software is, but if it didn't get **the badge** within their scope of computing, then you may as well be speaking Greek without UTF-8.

## 25.3 Reclaiming the Term "Alternative"

The requirements of getting the badge that makes all other software an "alternative" are pretty fuzzy. We're not really sure if it's first-come-first-served or whether it's market-share or brain-share (or how we measure brain-share). While those measurements do feel like obvious choices, it seems odd that _availability_ rarely enters the equation.

Certainly in my own life, the natural barrier to entry to most everything I do, both professionally and as a hobby, has been a trial of acquisition. I only managed to get into audio production because Audacity existed and was $0 to use. It was available, regardless of my financial state (which, as a college student, was not good at the time). FFmpeg single-handedly got me paid employment in the media industry, and I was able to learn and use it because it was available and cost $0 to use. The list goes on.

I realised some time ago that I live in an open source world. We all do, because open source drives so much of computing these days, but I mean that the way I compute is with open source at both the bottom and top of my stack; I use open source in my networking, I use an open source kernel to drive physical hardware, and I use open source applications at work and at home. To a degree, I live in a bubble, but it's a bubble that I consciously built and it serves me well. So the question is, if the "alternative" is my everyday computing experience, why should I still define it as "alternative"? Surely my way of life is not "alternative" from my perspective.

OK, so "alternative" is a malleable term. But it's bigger than that. It's not just a question of life with the Munsters, it's a question of who's allowed in. With open source, there's no exclusion; even in the worst case where you feel unwelcome by some community that is building an open source application, you _still_ have access to the code. The barrier to entry is your own resolve to learn a new application, and nothing more.

And that ought to be the standard, no matter what. My Rorschachian responses to application types default to open source, with the "alternatives" being the ones that you might choose to use if, for whatever reason, you find the ones available to everyone insufficient.

  * Office: LibreOffice
  * Photo: GIMP
  * Video: Kdenlive
  * OS: Slackware

The list goes on and on. You define your own "alternatives", but my mainstream day-to-day tools are not alternatives. They're the ones that get _my_ seal of authenticity, and they're open to everyone.

[EOF]

Made on Free Software.

# 26 Source RPMs

The benefit of running Red Hat or CentOS or Scientific Linux as a desktop is that you get a great Linux distribution with long-lasting support and a stable life cycle.

The disadvantage is that you don't have nearly as many installable packages to choose from. I'm talking, a mere sub-set. You can install the EPEL repository and try to limp by, but eventually you'll hit a wall, trust me.

So, you learn to build your own RPMs, but not from scratch! You can grab perfectly well-formed _source_ RPMs from Fedora, re-build them for your machine, and install. It's three steps.

  1. Install the RPM development toolchain.

        # yum grouplist -v
        # yum groupinstall fedora-packager

  2. Find the package you want to install in either the Fedora package database at apps.fedoraproject.org/packages or in the RPMFusion repositories: download1.rpmfusion.org/free/fedora/releases and download1.rpmfusion.org/nonfree/fedora/releases.

Download the source RPM you want. It needs to be a _source RPM_ ; otherwise, yum will complain that it's not an RPM meant for your system. Source RPMs generally use the `src.rpm` label in the file name.

  3. Rebuild the SRPM:

        $ rpmbuild --rebuild rare-nonfree-package.fc24.src.rpm

  4. Rebuilt RPMs end up in your user's `~/rpmbuild/RPMS/` directory, in the appropriate architecture sub-directory:

        $ sudo yum install rpmbuild/RPMS/`uname -m`/rare-nonfree-package.centos7.rpm

OK, I lied, it was four steps (but only three need repeating after you have the first step complete).

Either way, pretty easy.

[EOF]

Made on Free Software.

# 27 Voluntary Paywall

It's a modern world, we have modern conveniences, and we have been conditioned that everything is available for a price. That's one of those "facts" of life that we grow up with and that we learn to both love and hate. After all, it's socially unjust that anything is possible for wealthy people but withheld from normal folk. And yet there is some comfort there, isn't there? Anything is possible! if we just save up enough money, or else wait until someone rich happens to want the same thing as we do, it might be purchased into reality. Money brings us hope and comfort. Money can make all things possible.

Maybe that's why it's so scary to us when we come across someone or something for whom money holds no appeal. Seriously, go hunt down someone who does not want your money. It's like staring into the cold black eyes of someone who has no soul. They are impossible to reach. It's unsettling to modern man. How can someone refuse money? How can my money not force something into going _my_ way?

This is a big topic, and one that should be considered from many angles. Socially, it's a very powerful thing, and it's why governments are less concerned about money (an imaginary construct) and more about land ownership; everyone needs land because everyone has to exist someplace. You take land away, and then you start to get the reactions out of people you really want to provoke. It's a very effective strategy.

But let's look at this concept as it applies to software and open source.

## 27.1 Software

Sometimes I observe people using non-open source software. It's interesting to see how they deal with problems.

First, there are the work-arounds; the things that take effort but are still somehow more convenient. You know the kind; we all do them. Can't seem to make the printer cooperate? well, we'll export to PDF, put the file on a thumbdrive, and walk it over to the printer to print straight off the stick. Whatever. The problems are small and mysterious, the work-arounds are stupid and technically less convenient than finding the actual fix, but they fit into our rhythm and so we adapt.

Then there's the usual internet search and investigation phase. The workaround is becoming problematic or annoying, we really want to get this fixed, so we look online to see what other people are doing.

And finally, there's the payment stage. We couldn't fix it ourselves, so we pay someone else to make the problem go away. Sometimes that means we buy better software, or a better version of the same software so we can unlock otherwise forbidden features. Other times it means paying some high school kid from the local computer store to figure out the issue for us. It doesn't really matter how the problem gets solved, as long as we can put money down on the counter and have the problem go away. That's what we're after, because that's what we've been trained to expect.

Modern technology makes everything possible. You just have to keep feeding it money.

The impression most of us have, deep down, is that nothing is impossible. Technically, anything is possible in this world (because of technology!), it's just that some features are locked behind a paywall. And we're OK with that. Want to send emails in bulk without getting throttled by your ISP? pay a little extra and you can! Want to get faster internet? upgrade your plan and it shall be granted. Heck, we even see diseases as a pay-to-play scheme; have an incurable disease? pay extra to unlock the cure.

Bringing it back to software, the result of this mindset is that if we have a problem that we cannot solve, we can pay for it to be solved for us.

The problem with that is that _somebody_ had to come up with the solutions we're paying for. We're an advanced civilisation, so surely we don't actually believe that the **Buy Now** button in our web browser is actually auto-generating, from the depths of the Abyss, the answer to our problem, on-demand.

Right?...we don't actually think that, do we?

Sometimes I wonder.

## 27.2 Open Source

Since so many computer users are trained to believe that everything is available just around the corner of the next paywall, I notice that open source can be unsettling to people.

With open source, a lot of the problems (past basic configuration and trouble-shooting) are problems that can't be ushered away with a payment plan. Believe me, we've tried it, and it doesn't end well, because it's _open source_. There isn't one definitive set of problems that everyone in the world working on open source can sit down to solve; we each have our own list of things we need done, and it's not money that's the problem. The problem is that a solution hasn't been invented yet, but that's what we're working on, just as quickly as the problems turn up.

And by "we", I mean _each individual_.

You see, a lot of people seem to have an illusion that **Open Source** is the name of a company. A place where we all go each day and sit at our desks, or we remote-in because hey that's what high-tech people do, and we have meetings, and things are mapped out and things get done. That does happen in some places, but if we see open source like that, we're looking at it backwards. You see, open source isn't something _you_ go to in order to join, open source is a label that gets stuck onto you _post facto_. If I make some software for myself or for someone I'm helping, and I post its source code online, then when the Internet gnomes come round at night, they put a label on it, declaring it Open Source. If I add a license explicitly stating that you may use it and change it and share it, then it gets an upgraded label that says it's liberated software; it's software that I gift to you with the promise that I can never take it back from you. You may use it and keep a copy of it forever.

To do that, I didn't have to sign up with anybody, I didn't have to fill out any registration forms or log my time or submit my code for approval. I just wrote some stuff and made it available to others.

Now, if you come round and offer to pay me to make my code better, I may or may not agree. First of all, I might not agree that what you want is "better". Secondly, I may not agree that the pay you are offering me is worth the effort. Thirdly, I might not want to get involved in something that brings along with it a new dimension of expectation; if you _pay_ me for something, surely that thing is subject to your approval.

And that's unsettling to some people. It's the classic bit: "what do you MEAN my money's no good here? DO YOU KNOW WHO I AM?!"

## 27.3 Open for Business

The obvious sympathetic question is why anyone would ever want to use a product in which there is no recourse when something goes wrong. In fact, how can open source be taken seriously at all if there's no way for its customer base to call for features and improvements?

Well, that's the thing about open source, though, isn't it? with open source, just as anyone can choose to _not_ be swayed by money, anyone else can choose to work for nothing _but_ money. Open source is liberated software; if you want to pay someone to hack on it and add features, _you can_. If you want to rally up a group of people who want the same features, you can all pitch in to pay for those features to appear.

The difference is that the driving force comes from the need for something, while in the closed, corporate model software is churned out and customers build their workflows based upon it. It's a little like the paywall itself; demand-driven development requires effort from users, who must figure out what they want, and that's a tall order, especially since we're all being trained on a daily basis that _adapting_ is the way we are supposed to work. You give me something, and I'll work around it: that's how non-open software development works. When I want something better, I pay extra.

We can work with this. It's predictable, it's secure, there are clear definitions on what our rights as users are. As long as we pay, we get supplied with infrastructure that might not be exactly what we want or need, but it's close enough that we can adapt. If we need something different, we pay extra. If we need something that simply does not exist, then and only then, we get up from behind our desks.

And we make.

Something.

Happen.

That's open source. It's the act of getting up from behind your desk. It's not as comfy as the corporate version of life. It takes thought and effort, and sometimes you'd rather be doing something else. But when the money runs out, or the money simply cannot buy what you need, it's the principles of open source that are going to come to the rescue.

Because open source is production for use, not for profit.

[EOF]

Made on Free Software.

# 28 Proposal for Distributionism

The tradition of Linux distribution goes a little something like this:

In the beginning, you downloaded the kernel, compiled it, put it on a hard drive, and then started adding applications that would make the computer do useful things. Basically, Linux from Scratch.

Very quickly, somebody decided to collect the common components on some disks and distribute them as a collection or, as we say, a "distribution".

And then, just as predictably, somebody else decided that those applications could be distributed _better_ , and so they also distributed them.

As any tried-and-true Linux user will tell you, distributions are a good thing. They demonstrate that Linux is a healthy and open environment where people can take work that someone has done, change it and add to it, and re-distribute it. If we, as users, lose that, then we'll lose something integral to Linux.

But I see a little loophole here, and that's in the "change it and add to it" clause. If we're honest, a lot of distributions out there don't actually add that much to what they're re-distributing; they might add a few custom scripts, or maybe even a whole new desktop environment, but generally they're re-distributions of distributions.

Don't get me wrong: I don't think there's a problem with that. Re-distributing a "remix" of something is a powerful feature of Linux, so I don't fault anyone for it, and I don't think it needs to stop.

Then again, I do wonder if there's an alternative that we should be looking at, when appropriate.

## 28.1 Mods

I came to Linux partly because I wanted to find a new interface into my computer. I was finding the desktop model clunky, and certainly I'd completely outgrown the one desktop made available to me by my closed source vendor. I had supplemented it as much as I could, but it just wasn't working for me any more.

As part of my initial exploration into the concept of modding my computing experience, I came across a lot of online videos by guys doing case mods. They'd build custom towers, or they'd take existing computer cases and take drills and dremels to them to make something new. I always thought this was cool, and once I got into Linux proper, I took inspiration from these hardware modders when designing my own interfaces.

Conveniently, there's an analogy to be made here.

  * Hardware mods are creative.

  * They build upon existing material.

  * And they don't require you to swap out your computer for a new one; they create a new one with the thing already sitting on your desk.

Now, read all those points again. Carefully.

And apply them to software distribution.

Linux is a _smart_ technology, so if you're distributing a collection of applications and the Linux kernel, which would you rather be able to say:

"It's easy! just back up your existing install, re-format your drive, install my distribution, restore your data, and you're done!"

or...

"It's easy! download and run my setup script."

Additions to a system may justify a full re-install, but they should rarely _require_ a user to download Yet Another ISO and start from scratch. I understand that that's the "easy" delivery method from a dev point of view, but that doesn't make it right for the user.

As Linux users and supporters, we should strive to keep the technology streamlined, smooth, and simple. If that means we need to develop easy ways to distribute _components_ of a distribution rather than an _entirely separate_ distribution, then we should do that. The fact is, there are only a handful of distributions with their own unique infrastructure. Canonical's own controversial move to bar other distributions from using Ubuntu infrastructure and resources is a prime example of this; derivative distributions coast on the foundations of their parents. It's pretty common.

I feel that there's a point at which we, as users with ideas to contribute, should stop and ask ourselves how we want to contribute them for others to use. Do we need to spin an ISO? Do we need a website and a forum and a mission statement about how our distribution of Linux will bring people to Linux like no other distribution can?

Or can we publish a set of packages that build upon the core of something already widely available? The steps could be, instead, 1) be running SUSE and then 2) run the Haxx0r Edition install scripts. Suddenly, you've got your obligatory pen-testing edition of your favourite distro, complete with your custom-designed desktop environment, your icon set, your fancy hazmat logo, and all the additional tools you want your users to have.

It's a custom distro without the distro.
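In fact, the whole "Haxx0r Edition" could plausibly boil down to a script this small (a hypothetical sketch; the package list is invented, and on SUSE the underlying package manager is `zypper`):

    #!/usr/bin/env python3
    # Hypothetical "Haxx0r Edition" setup script: layer packages onto
    # a stock SUSE install instead of shipping a whole new ISO.
    import subprocess
    import sys

    PACKAGES = ["nmap", "wireshark", "john"]  # invented example list

    # --non-interactive answers zypper's prompts automatically, so
    # the script can run unattended.
    cmd = ["sudo", "zypper", "--non-interactive", "install"] + PACKAGES
    sys.exit(subprocess.run(cmd).returncode)

The fancy desktop environment and icon set are just more package installs and config files delivered the same way.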

## 28.2 Proof of Concepts

I use Slackware, and I have, for a long time, worked in industries surrounding the "film industry". As a result of the latter, I get a lot of requests from clients to make specific things work on the former. Sometimes these requests are pretty obvious; get a good video editing workflow going, or set up a recording environment, or whatever. These requests aren't specific to Slackware, it's just a general setup request. Other times, there are requests to set up something that, as far as I can tell, no one else has implemented, like setting up an ACES-compliant workflow, or coming up with a stop animation solution, and so on.

To document the solution to these requests, I set up a site called Slackermedia. It's not a fancy site, and it doesn't offer a whole lot, except a complete outline of how to create a reliable and robust multimedia studio infrastructure based on Slackware Linux.

The parent distribution, obviously, is Slackware. I rely entirely upon Slackware's maintainer, upon Slackware's servers and mirrors, and upon Slackware itself. But that's not all. I also rely upon the Slackbuilds.org community and maintainers to provide 92% of the "extra" packages that Slackermedia instructs a user to install to achieve a flexible and complete multimedia studio OS.

Slackermedia doesn't provide a downloadable ISO. It doesn't provide packages. It's just text.

For that reason, I call Slackermedia a "distro-from-text".

Because it's Slackware, I expect a user to read the docs and learn how to intelligently compile and install the software, and then how to use the software in a production environment. So I don't even provide install scripts (aside from those that I write and contribute to Slackbuilds.org), although I do provide a list of packages that they can install in bulk and get about 95% of the way to a full "Slackermedia" install.

It's a distribution that distributes nothing, but it would be trivial to compile packages and point people to them; a distribution of software without the ISO. There are existing examples of this, too; there's the Planet CCRMA for Fedora, kxstudio for Ubuntu, and Studioware for Slackware. All of them are arguably "distributions" in their own right, and not one ISO or re-install to be seen.

It's time to re-think the concept of "fragmentation". Do multiple Linux distributions cause fragmentation? Well, by definition, yes, but not destructively. The real problem is technological inefficiency. Linux doesn't mix well with that, so let's not try to force it into something so clunky.

Let's re-think and re-make the Linux _distribution_. It could and should be so much simpler.

[EOF]

Made on Free Software.

# 29 Unix is not OS X

If there's one thing I can't stand, it's the trope that "Mac OS X is just a fancy UNIX!"

It's annoyed me for a very long time. Initially, it just annoyed me as a user, because, while strictly _true_ , it's a misleading statement. It suggests that if you are using OS X (Unix), then if you want to later use Unix proper, you'll have a head start. After all, you had been using Unix all that time, right?

As I have often said: no. I mean, yes, you've been running Unix. But what you've been _using_ is Cocoa.

Because that's what OS X really is to most people: it's Cocoa. Cocoa doesn't need Unix; it could be ported to Windows, if Apple Incorporated ever wanted to bother. It certainly isn't particularly Unix "aware". Sure, if you move a file in the Unix shell, the file also moves in your graphical desktop view, but that's not because there's any connection between that shell and your graphical environment, it's because both of them are looking at the same file system on the same hard drive.

Don't get me wrong. I understand that there is, inarguably, UNIX happening in the standard-issue (what am I saying? the _only_ -issue) OS X machine. And we unix-geeks should take pride in that; the design schema we "champion" (in whatever way we champion it) is working! it's working well enough to drive a billion dollar company.

But imagine looking at a zebra and calling it a human because it's got bones and DNA and stuff. OS X is so thoroughly determined to NOT act like Unix that calling it Unix is almost antagonistic.

But to this day, people cite stuff like fink, macports, and homebrew. Homebrew even calls itself "os x's missing package manager". First of all, why is it missing? Second of all, OS X doesn't let them work the way a package manager on any other unix distribution works. I installed a Python library the other day through homebrew and spent an hour trying to import the thing before stumbling across the widely accepted "answer": install Python from homebrew and use _it_ instead.

No.

No no no.

That's not how it's supposed to work. I don't mean package managers, now, I'm talking about _computers_.

Yes yes, `sys.path` and all that. I don't take issue with how programs find libraries, I take issue with the persistent, against-all-evidence insistence that OS X _is_ Unix. OS X _uses_ Unix tools for low-level access to some components of the system, it uses Unix to manage its file system (I don't mean HFS+ specifically, I mean the file structure of its system), but OS X as a packaged product is not Unix. OS X is Cocoa, and to develop for it you _will_ use its API set.
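Incidentally, if you ever land in that import mystery yourself, the quickest diagnostic is to ask the interpreter which interpreter it is and where it actually searches:

    import sys

    # A homebrew-installed library lands in homebrew's site-packages,
    # which the Python you happen to be running may not search at all.
    print(sys.executable)
    for path in sys.path:
        print(path)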

## 29.1 So What's the Point?

My point is that using the Unix part of OS X is a little like creating a chroot.

No, sorry. Actually it's a _lot_ like creating a chroot.

If I wanted my Unix environment broken out into its own separate system, I'd just dig out a computer from the rubbish bin and run a proper unix on it. I don't see the point in having "it's unix!" on OS X if it's functionally the same as Cygwin on Windows.

"It's Unix!"

Big deal. By these standards, so is my toaster.

[EOF]

Made on free software.

# 30 Nixstaller

There are many ways to install things on the Linux OS, and one of the methods I, as a sys admin, have been playing around with lately is Nixstaller, an easy-to-use and easy-to-make "install wizard" for POSIX systems. My personal use case is very specific, and nixstaller has excellent documentation, so this post is less about how to use nixstaller and more about how I use nixstaller.

You may have seen nixstaller before, whether you know it or not. It's a GUI installer (although it also has a method of running within a shell) and has been used as the "install wizard" for several high-profile games; in fact, probably _most_ high-profile games prior to Steam-on-Linux. If you ever bought a game from the Humble Bundle before they turned into Just Another Steam Key Reseller, then you probably installed something with nixstaller.

Generally speaking, I personally use nixstaller in three different cases:

  * Local bundled install: I have a collection of deb, rpm, or slack packages that need to be installed by instructors who have an admin password but do not have an internet connection.
  * Universal install: I have an application that I want to distribute but I do not know the technical capabilities of the audience.
  * No Other Choice: there is an application that distributes itself in a way designed for system administrators to re-package, not for inexperienced end users to install.

It's important to note that I am in no way implying that I use nixstaller for the mythical "grandparent" user who is "too dumb" to be taught new tricks, because I do not believe that this mythical user exists. However, I do believe that there are users out there (because I deal with them daily) who have no experience with Linux, no internet connection, and really cannot be expected to learn how to do manual installs of a sequence of dependencies and applications during the two hours a week that they are on a Linux machine. There are more important things to teach them. And besides that, by rolling my own controlled installation tool, I ensure success every time, while leaving it up to a user risks a variety of failures.

With that in mind, here is how I use nixstaller.

## 30.1 Download Nixstaller

The first step is to download nixstaller from its homepage.

Curiously, I do not tend to install nixstaller itself. I treat it more like a SlackBuild or RPMBUILD environment, because realistically when one builds installers, one tends to work in a centralised location.

So I usually just download the tar archive of nixstaller rather than the nixstaller installer of nixstaller (yes, that's actually real).

Unarchive and then cd into the nixstaller directory.

## 30.2 genproject

Use the local script `genproject.sh` to generate the default project directory.

    $ ./genproject.sh pkg-foo
    $ cd pkg-foo

Now, here's the thing about nixstaller. I get the sense that it was originally designed for an old style of installation, which is the untar-and-distribute method. You see, before RPM came around, a popular way of distributing code (aside from as pure source code, I mean) was as a tarball (often with the extension `.tgz`). The cool thing about tarballs is that they can contain an effective, yet utterly minimal, mirror of a filesystem. So inside of one of these installable tarballs were folder structures, such as `/usr/bin` and `/usr/share` and so on. This meant that if you untarred the tarball and pointed it at the root of your filesystem, it detected that the same directories existed, and simply took the files out of the tarball and placed them into the existing folders on your system; it basically "over-laid" the tarball contents onto your in-use filesystem. What had been "installed" in a tarball was now installed on your system.

Make no mistake: this was ingeniously simple and effective. Slackware still uses this method today, with the one important addition of keeping log files for each tarball applied, detailing each and every file that it had contained. That way, if you wanted to remove an application, you could do so by pointing an uninstaller script at the log file ("look in this index and un-install all the files listed in it; do not remove parent directories if they are left non-empty").
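That uninstall logic is simple enough to sketch in a few lines of Python (a toy version of the idea, not Slackware's actual `removepkg`; it assumes the log lists paths relative to the filesystem root):

    from pathlib import Path

    def uninstall(logfile, root="/"):
        # Remove every file named in the package's install log.
        paths = [Path(root, line.strip())
                 for line in Path(logfile).read_text().splitlines()
                 if line.strip()]
        for p in paths:
            if p.is_file() or p.is_symlink():
                p.unlink()
        # Prune parent directories, deepest first; rmdir() only
        # succeeds on empty directories, so anything still in use
        # is left alone.
        for parent in sorted({p.parent for p in paths},
                             key=lambda d: len(d.parts), reverse=True):
            try:
                parent.rmdir()
            except OSError:
                pass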

Brilliant or not, this method is generally rare today. Nixstaller can deal with this method, or it can deal with newer install methods, but it's important to keep in mind that the most basic default behaviour of nixstaller is to look at the payload files, extract them to the install destination, and call it a day. If you are NOT distributing software this way (and I often am not) then you do need to make adjustments to circumvent this behaviour. Just keep that in mind as we go forward, because many of the things I do with nixstaller are to avoid and add onto the default untar-and-quit behaviour (not because I dislike this behaviour, but because many of the installers I deal with happen to not be tarballs).

## 30.3 Place the Payload Files

The next logical step is to place your payload files into the appropriate directory. For me, this is invariably `files_all/foo`. They are placed in `files_all` (as opposed to, say, `files_freebsd_all` or `files_x86_64`) because all of the installers I create lately are either 100% targeted for a specific set of known computers (with the same architecture) or they are being built from source. They are nested into `files_all/foo` because I generally want the files to be placed, nice and tidy, into some temporary directory where I can work on them before they are actually copied to the system. If you are distributing a magical tarball that extracts itself to the filesystem as a perfectly-fitting overlay, then you do not need a sub-directory. But that's not the way I do it.

If you are putting together a more complex installer, you may need to create new directories, one for each arch, or possibly one for each OS (Linux and FreeBSD, for example). Again, nixstaller has great documentation and examples, so you can refer to those to figure out specific requirements.

## 30.4 Place UI Design Files

Another neat thing about nixstaller is that you can customise the look and feel of your install wizard. For me, this means that I can "brand" the installer so that users see for themselves that this was made locally, by real people (an important concept to reinforce at the makerspace where I do this work). There are three types of elements that you can change:

  1. An optional intro image, displayed on the initial screen of your install wizard. Must be a 300x200 pixel image, in the usual formats (`png`, `jpg`, or `gif`)
  2. A logo, displayed along the top message area within the wizard's interface. There is no size limit specified in the nixstaller docs, but in practice you probably want your logo to be pretty small, like 64x64 pixels or so. If it's too large, then the title area at the top gets really big and makes your installer interface unnecessarily huge.
  3. An icon, displayed in the window decoration and task bar. This must be an .xpm file. It's presumably best to keep this at a standard icon size (256, 128, or 64 pixels). It will be scaled as necessary.

Generate these in Inkscape or GIMP and then place them in the `files_extra` directory, or just skip this step to use the defaults (which are quite nice, themselves).

### 30.4.1 Welcome Screen

You can put text into a file called `welcome` and its contents will be displayed in the intro screen of the wizard. If you have defined an intro image, then the message is displayed to the right of your intro picture, which can get a little clunky, since the default word-wrap seems to allow for a horizontal scroll bar, which for a simple message like "Welcome to the Foo Installer! Click NEXT to continue." seems a little over-significant. So I keep the welcome message short and simple; I let the user know what they are about to install, and who made the installer / who to contact if there are problems.

## 30.5 Modify config.lua

The `config.lua` file adjusts the look and mode of the installer itself. None of this gets used during the actual install process, it just sets up how and what the install wizard is going to present to the user.

There is a wealth of options available, but the default options are all I use for my work at a local "maker space".

  *  **cfg.mode = "attended"** the point of this installer, to me, is that it is a familiar install wizard with clear instructions on how to proceed, and confirmation of success. So I generally write attended installers. I am assuming, at least for my current audience, that if they know enough to execute the installer non-interactively, then they do not need an installer. This is only my personal use of nixstaller; in fact, nixstaller has great options for **unattended** installation, so feel free to incorporate that into your installers if you need it. Frankly, sometimes when I'm using my own nixstaller wizards on a computer on site, I do wish I made them hybrid, just because the attended mode is _so_ interactive. So think about whether you want "power user" options or not, and choose wisely.
  *  **cfg.appname** I have a script that sets this to the name of the application I am building the installer for (I do this by parsing the basename of the current directory, which I set as pkg-foo). The only place this string shows up is in the pop-up dialogue boxes telling the user that "new software" will be or has been installed.
  *  **cfg.archivetype = "lzma"** specifies the type of archive you want nixstaller to create when it bundles everything up as a re-distributable, double-clickable install file. This does _not_ refer to the type of compression your payloads use.
  *  **cfg.targetos = { "linux" }** this is the result of the uname command on a system. For me, I am only doing installers for Linux so that is what I use. In the event that I am doing an installer for Linux, FreeBSD, OpenBSD, NetBSD, and sunos, I would add those strings into the list.
  *  **cfg.targetarch = nil** I keep this at **nil** because I am only ever building for one architecture. For more complex examples, just look at the sample installers bundled with nixstaller.
  *  **cfg.frontends = { "gtk", "fltk", "ncurses" }** I include all the possible frontends for the installer.
  *  **cfg.defaultlang = "english"**
  *  **cfg.languages = { "english", "dutch", "lithuanian", "bulgarian" }** I don't need them, but I include all available translations because it just doesn't add enough bloat to bother leaving them out.
  *  **cfg.autolang = true** Let the installer choose the best language.
  *  **cfg.intropic = "intropic_300x200.png"** sets the name of the intro pic to my custom file. Note that its full path is not necessary; its root is automatically set to **files_extra**
  *  **cfg.logo = "tux.png"** the location of the logo file, from **files_extra**
  *  **cfg.appicon = "tux.xpm"** the location of the icon file, from **files_extra**

There are heaps more options, so if you need more flexibility or more features, see the nixstaller docs.

## 30.6 Modify run.lua

Everything up to this point has been sublimely simplistic. This file is where all the action happens, though, and what needs to be done here depends entirely upon what your payloads are made of. This is a really really _really_ flexible script, so there just isn't a generic all-encompassing explanation of what needs to be in it.

I'll step through the defaults and pseudo-defaults, explaining what each function does and how they may be used. But I am leaving a LOT out. Nixstaller can be very cool, very amazing, so whatever your payload may be, you should refer to the docs and find out how to best handle your source files.

  *  **function Init()** As the name suggests, this function is run once when the installer is launched.

Common uses for it are:

    *  **install.destdir** This is a little tricky, because the name is **misleading**. It reads like it sets the location to which files are going to be installed. But actually, it sets the root path to which files are going to be _extracted_ (it should be called `install.start` or `tmp` or something, but it isn't).

If your payload is a **.tgz** that contains a miniature filesystem structure that is meant to be extracted and sprinkled over your filesystem, this could be exactly what you are looking for (although actually you probably want to investigate nixstaller's ability to register the components with the system's package manager instead).

More commonly, I think, this variable should be set to the location to which you want your source files to be placed so that you can then configure and install them programmatically. The only time you actually want **install.destdir** to be set to, for instance, `/` or `/usr` is if you really do intend to have your installer do a `tar xf foo-x.x.tgz -C install.destdir`

    *  **install.screenlist** is a list of the screens you want displayed in the wizard. The standard set is **WelcomeScreen** , **LicenseScreen** , **InstallScreen** , **FinishScreen** but I omit the license since everything I deal with is open source, and it's either me or an instructor doing the install and we both agree to all OSI and FSF -approved licenses.

You can also create your own screens, and customise widgets and request further information but I don't do that in this use case.

  *  **function Install()** gets run when the **InstallScreen** appears. This is the heavy lifter; everything important happens here.

The first thing you want to do is execute the **install.extractfiles()** method (to extract your source files from the nixstaller archive and place them into your filesystem at **install.destdir** ) and then change to the directory where your source files are located with **os.chdir(install.destdir .. "/foo")** (assuming you have placed, as I do, your payloads in a dedicated dir; note the Lua `..` string concatenation).

Now you are "in" a directory containing all of your source packages. The question is, what do you need to do with them? Do you need to compile them from source? do you need to use the system's package manager to install them? do you need to extract them, patch them, and then distribute over the system? or something entirely different?

It's all up to you. This is basically your opportunity to run either built-in helper methods, or completely arbitrary code. Helper code exists for compiling from source, and for creating a native package so that a user can easily uninstall the app with their normal system tools. So far I have not needed any of that, but I have used a method to detect whether or not we need to grab an admin password:

          os.chmod("doinst.sh", 700)

      install.setstatus("Installing files")

      if (os.writeperm("/usr") == false) then
         install.executeasroot("./doinst.sh")

      else 
         install.execute("./doinst.sh")

In that example, the doinst.sh command is a custom shell script that I run to perform whatever install sequence I happen to be doing for a given application. In the same way, you can run arbitrary code; just place a shell script that performs whatever install action you require in the directory along with your payloads and tell nixstaller to execute it.

  *  **function Finish(err)** is the final default function. You can use it to do final clean-up tasks, or auto-launch the application that has just been installed, or whatever else you might imagine for the final step. Because this receives the value of err when being called, it only runs upon success of the installation. If there was an error, this step will not run.

My default run.lua file typically looks like this:

    function Init()
       -- remember: destdir is where payloads get extracted,
       -- not where they finally get installed
       install.destdir = "/tmp"
       install.screenlist = { WelcomeScreen, InstallScreen, FinishScreen }
    end

    function Install()
       install.extractfiles()
       -- Lua string concatenation; install.destdir/foo would be
       -- arithmetic, not a path
       os.chdir(install.destdir .. "/foo")

       os.chmod("doinst.sh", 700)

       install.setstatus("Installing files")
       if (os.writeperm("/usr") == false) then
          install.executeasroot("./doinst.sh")
       else
          install.execute("./doinst.sh")
       end
    end

    function Finish(err)
    end

But, again, that's just how I do it for the applications that I need to have installers for. Yours Will Be Different!

## 30.7 Run geninstall.sh

Assuming everything has been filled in, created, or populated as needed, you can now wrap it all up in one magical installable bundle. Back out in the main nixstaller directory, run the `geninstall.sh` script.

    $ ./geninstall.sh pkg-foo foo.install

This takes the `pkg-foo` directory and bundles it up as a self-contained, executable package containing everything a user needs to run the nixstaller UI and to install the software you are distributing. How do they run it? probably some really complex shell command with lots of fancy switches and args, right?

Well, no. They double-click it.

No, seriously. Give it to a user; one double-click and the install wizard launches, steps them through the install process (such as it is), and the deed is done.

On some desktops, the user is presented with a choice: do they want to run the script in a terminal, or just run the script? First of all, I do not really understand the distinction; if I click on an executable script, then execute the script. But still, the choice may exist on the desktop you use, and the correct option, counter-intuitive though it may be, is to run it in a terminal. This launches a shell that uncompresses the nixstaller archive, which in turn launches the nixstaller UI.

Enjoy.

[EOF]

Made on Free Software.

# 31 Why I Love Linux, Developer Edition

I am periodically stricken by emotional responses to Linux, so I've written several times about why I love it as an operating system, a culture, an environment, a tool, an inspiration, and whatever else. Usually these articles are written from the perspective of a user, because I am a Linux user both at work and at home, or as a sys admin, because I am also a sys admin. Lately, though, I've been doing a significant amount of development, and it's made me see Linux from an entirely different angle, and develop a whole new set of feelings for it.

I've done minor development on Linux before; I started out, like many Linux users, hacking stuff together in order to make life easier. That was, after all, one of the reasons that I got into Linux in the first place: I saw on my old OS that there were processes that seemed so repetitive, or so clunky and laborious, that it just felt like doing them manually was counter to the very point of doing them on a computer at all. We humans invented computers to enhance our ability to work, not to increase our work (is that still the story we're sticking to?) so why were there things on my computer that I could not automate or improve?

And that was a great thing to get out of Linux. After a childhood of not understanding _why_ I did not have the power to make my computer work better for me, I had gained the ability to customise my tools. This was literally life-changing, and frankly it's reason enough to love Linux from a developer standpoint, however "trivial" the code behind the development might have been. Heck, even my first shell script ever, the automation of connecting to my wireless network, signified that in one month of Linux use, I had been empowered to learn more about computing than I had in 5 years of struggling in earnest with my former OS.

One of the funniest, and yet painfully true, articles I've read on the topic of developing on something that is not Linux is by Ted Dziuba. You should read it sometime.

http://harmful.cat-v.org/software/operating-systems/osx/osx-unsuitable-web-development

Ted's article is specific to web development, but looking at this issue more broadly is important, too, because it actually gets even better.

When you're developing software, you don't see things the same way you do when you're just being a user, or when you're a sys admin. Don't get me wrong: I'm not saying that your genetics change or that you become spiritually elevated, I'm just saying that the needs and concerns of someone wearing a developer hat are different from someone wearing a user hat. Here are some ways that a developer's life is better on Linux:

## 31.1 OS as a Platform

Do you think IDEs are neat? then why not buy some property? Unix has developed a truly unique method of making software available to its users: essentially, it stores practically every library and every application _in existence_ on your computer. Ah, but wait, when I say "your computer", I mean in the Sun Microsystems sense, where the **Network is the computer** ; so, really Unix stores every library and application in existence on a server somewhere, and when you want to use it, you grab it from your network, it gets dropped into place, and you're up and running. It's a push-button system that makes it trivial to get the library you need.

Of course, I exaggerate when I say "every library and application in existence", but not by much. If you're developing your own solutions, then there's practically nothing you'll be without. Obviously, if you were entrenched in platform-specific technology prior to moving to Linux, then this won't hold true for you (that's kinda the definition of "platform-specific"), but if you're using cross-platform tools and are willing to use cross-platform frameworks, then you'll just keep finding more and more to use in your work.

The amazing thing about this is that it's _so_ darned profound that users of other platforms have "borrowed" the idea. You don't have to look far to find implementations of Linux-style package management on other platforms, because even if you never touch Linux, you just can't argue with effective development systems. And that's where Unix has really excelled in the development world; it not only set the pace, but it defined the very paradigm. Do you have to develop this way? no, you don't; and some groups don't. But on the whole, the Open Source model pervades, and it's empowered by effective, convenient package management.

## 31.2 Everything is an API

On Linux, I was used to having direct access to anything I wanted: webcams, tablets, gamepads, printers, the entire PCI bus, USB profiles...literally everything attached to the computer, whether internally or externally. Simple example: let's say I'm writing an application to take a snapshot from the user's webcam. On Linux, it is literally as simple as gathering data from `/dev/video0` (or `1`, or whatever number of attached cameras you want to use). It's that simple.
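To put some code behind "that simple", here is a complete snapshot grabber (assuming the third-party opencv-python module; on Linux its capture backend is essentially a convenience wrapper around that `/dev/video0` device node):

    import cv2  # third-party: the opencv-python package

    cap = cv2.VideoCapture(0)  # device 0 is /dev/video0
    ok, frame = cap.read()     # grab a single frame
    if ok:
        cv2.imwrite("snapshot.png", frame)
    cap.release()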

You don't really realise just how simple that is until you try doing the same thing on a non-open operating system. On those, you have to use an SDK. Don't get me wrong, that's not always a bad thing by any means, but it is a different way of working. Sometimes, it makes things easier; after all, an SDK usually implies that a "kit" of lots of fancy ready-made tricks comes bundled with it. No programmer's going to argue with that. But why does the SDK have to cover _everything_ and be the only avenue to where you want to go? What if you don't want to program in the language the SDK uses? What if all you need to do is access one tiny component of the computer, while everything else in your code works perfectly? why should you have to download, install, and learn a whole SDK frontend just to do a simple task in otherwise SDK-free code?

The problem with the closed-source "you have to use an SDK" model is that it introduces a layer of abstraction that is non-optional. You _have_ to use the SDK, pretty much whether you like it or not. I'm not opposed to getting a bunch of royalty-free code dumps that I can use, but at the same time, I don't always need it.

## 31.3 Dev Tools Implicit

It's a strangely subtle luxury that sneaks up on you, but on Linux the dev tools are woven into the very fabric of the OS. I don't just mean GCC, because not all Linux distributions include GCC in the initial install, but even without that, there are so many ways to develop on Linux that you almost can't help but be technologically creative. The whole system encourages creativity. Maybe you haven't installed GCC, but maybe you'd like to just quickly shell script a repetitive task? or maybe you want to quickly invent a little Python application, or a Perl application or script. Maybe you need a plugin to convert thumbnails or sound files, or to send emails, or to take a photo for a time lapse art project; the sky's the limit, and that's without anything more than a base install of Linux. I'm talking about a system that could fit on a CD-ROM. I've seen closed source applications that can't even boast that kind of download size.
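To give one made-up example of the kind of five-minute scripting I mean: renumber a directory of time lapse photos so any encoder can pick them up in order, using nothing beyond the Python in a base install:

    from pathlib import Path

    # Hypothetical example: give the shots sequential,
    # zero-padded names.
    shots = sorted(Path("timelapse").glob("*.jpg"))
    for i, shot in enumerate(shots):
        shot.rename(shot.with_name(f"frame{i:05d}.jpg"))
    print(f"renamed {len(shots)} frames")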

Need bigger and better tools? they're a download and install away. That sounds rather unspectacular, because it is. It's just that simple. You decide you want to develop a C++ GUI application one moment, and you're doing just that ten minutes later.

On a closed system, things are a lot different both technologically and culturally. There are users and there are developers; and the two aren't really supposed to intermingle. And it shows. On Mac OS X, Apple wants you to install Xcode to do any kind of development work. Sounds simple, especially since all the websites declare proudly that Xcode is an easy, free download! Cool!

Except it's not cool. To download Xcode, you need to create a "developer account" with Apple. That's free as well. Sure, it's a little invasive, but you can use a fake email. Then you have to verify your email address, and then log into your new account from within the OS, and then finally download a 4GB compressed disk image.

Wait, 4GB for development tools? Heck, I downloaded a "bloated" Linux OS the other day and it was 4GB; it included the OS, every driver imaginable, not one but five desktop environments, a graphics editing application, an office suite, development headers, development libraries, a compiler or two, and a lot more. So how can Xcode possibly amount to 4GB just for a compiler and an IDE suite?

Well, actually it's not really 4GB. That would be crazy. After the compressed archive has been downloaded, you install it once only to discover that the installer is broken (at least during February 2016, the shell installer was silently broken) and so you try again using the GUI installer, and then you wait while it installs, and finally, at long last, you see that the final dev suite for this OS is no less than _eleven_ gigabytes.

I'm not sure which is worse: the convoluted path to the final install, or the final insult of discovering that just to compile a silly "hello world" C application, you had to install 11GB of software with no option for modularity.

Are there hacks around this?

Yes, of course.

Should you have to scheme against your own computer?

Absolutely not.

## 31.4 Installing Apps, Integrating with the System

On Windows, NSIS makes things better, but I'll be danged if `.lnk` files aren't binary. On the Mac, rave all you will about `.app` bundles, but _you_ try to create one by hand.

## 31.5 Open Source Licenses

Sounds silly to point this out, but the fact that you can live and work in an open source environment without any real effort is one of the most liberating things about developing on Linux. If you've ever had to develop in a closed environment, then you know that _usually_ (hopefully) you have the tools your job requires, but when you haven't got a library for something, you're well and truly blocked. To get back into the swing of writing code, you have to:

  * Find a library that does what you need
  * Verify that the license permits you to use it
  * Before you buy: investigate whether the lib will work for you
  * Pay the licensing fee for the library
  * Learn the library
  * Use the library

When you're living in an open source version of reality, those steps get refined:

  * Find a library that does what you need (you may already be using it on your system)
  * Learn the library
  * Use the library

It's half the work, really, at an eighth of the stress.

In practice, the only people who will relate to this benefit are those who've worked in large, closed source corporations. That's where you actually run into these problems. For the home developer who just taps into the Mac or Windows "dev community", it's usually a hybrid situation: you use as much boilerplate code from your parent company as you can, and then grab whatever open source stuff you need to make life easier on yourself. There may be some licensing hoops to jump through (when you want to use a library but cannot legally ship it because you are not releasing as open source), but mostly it's just a matter of giving credit where credit is due, or coming up with a silly installer work-around (like how Audacity helps users download and install the LAME library for fear of violating MP3 patents were they to ship it along with their product).

But that doesn't mean it's not valid. For the people who toil in the closed source world by day, open development is a life-changing, liberating breath of fresh air.

## 31.6 Port Everything

There's arguably a syndrome in open source software and free culture whereby people rave about how great it is to have access to source code, or the ability to do one thing or another, but when you look at how often they actually _do_ the thing they are excited about, you find that if it had been taken away from them a year ago, they'd never have noticed.

I find that in the worst examples of this, the accusation is essentially accurate; nobody's looking at the code, nobody's using the code, and in those cases the criticism is, by definition, true. In reality, though, even if a handful of users never bother looking at code and never bother writing a plugin for an extensible application, there are other users who do.

And it's from those users that open source benefits, and the liberty of being able to look at code and change code is justified.

I say this both as a user and a packager, myself. As a rule, I'm one of those niche users who always seems to dig up old forgotten software with the sudden need to use an obscure feature that nobody else has needed for decades. Or, on the other end of the spectrum, I find myself very often picking up the latest software wanting to run it on not just old hardware, but old hardware with (relatively, to most people) obscure architectures, running Linux.

So what's that mean, in the end? It usually means that I need the source code so that I can make tweaks here and there in order for it to compile and run on the platform I'm using.

There aren't really degrees in this game. It's either on or off; you either have access to source code, or you don't. And when you're somebody like me, you notice it when you don't have access, because there's no negotiation when you try to get the thing running on less-popular platforms.

The interesting thing about this sort of problem is that it's rarely a matter of a platform not being supported because it would take too much work to port the code over; it's simply a matter of the code never having been compiled for that platform, because it just isn't in the 80%-20% split of the platforms the developer felt it necessary to build on. So there's no _reason_ it shouldn't run on a given platform, it just doesn't.

Imagine a light switch. If the light is off, then the solution is to flip the switch to the On position. With closed source software, in this context, there is no switch.

## 31.7 Using the Source

If I were to say that one of the main benefits of Linux is the fact that there are examples of code all over the place, I mean:

There are examples of code **all over the place**. There is example code in _literally_ (in the literal sense of the word "literally") everything you use when you use an open source stack. Want to write a plugin for GIMP or Qtractor or some other media editor? look at the plugins you have already been using and learn. Want to learn a programming language? look at a tool written in that language. Want to write a clone of a tool that's almost perfect but you think could be better? look at its code and avoid the same mistakes. Want to improve something by adding a feature or fixing a bug? look at the code.

Open source means that the source is available for you to look at, at the very least, and quite likely it's within your right to change it. Either way, open source gives you working, in-production samples of development languages and tools that you might be evaluating for your own project. I myself benefit from this profoundly, and in fact I usually have the luxury of approaching it from the other direction; I get used to using certain tools in real life, and then when I decide to write an application, I choose to use the frameworks that are in use on my desktop already. It's the very definition of seamless integration.

## 31.8 Free Agency

There's open source and then there's open source. And Linux is as open as it gets.

Of course, to the pedantic, the distinction is summarised by the term "open" as opposed to "free", but the term "free" is ambiguous at best (it still amazes me to think that there is no adjective form of "liberty" in the whole of the English language). To me, though, dwelling on the words themselves tends to dilute the impact. The point is that on Linux, you're entirely at liberty to do whatever you want, especially as a developer.

For this point, I must use the term "developer" pretty loosely, because I exist on that spectrum and have for a very long time. As a power user of computers, even when I'm acting as a regular everyday user, I still have that trace of developer tendencies. I don't take applications as finished products, ever. That's just not how I ever learned to look at computers.

And when I put a developer hat on and do serious coding, I definitely don't look at existing conventions as the boundaries for what's possible for me to create. Linux, with its diverse inhabitants and applications, its multitude of frameworks and toolkits and subsystems, and its complete openness, allows developers, and users who won't settle for less than exactly what they want, to construct anything. Not just what other developers think you want to build, and not only things that fit inside an existing container or within the limits of a specific use case, but anything you can imagine and figure out how to create.

Why would you settle for anything less?

[EOF]

Made on Free Software.

# 32 rm 'rm'

OK, don't really go and `rm` the `rm` command; that's a horrible idea, but the sentiment is sincere. You see, I take issue with the `rm` command. I don't like it as a command; I've never once come across an intermediate-to-advanced user who can claim never to have accidentally lost data because of `rm`. In terms of a command line interface, I sincerely believe that `rm` is the single greatest enemy of the user (with the `find` option `-delete` being a less common runner-up).

So yes, I hate the `rm` command. Why? Well...

...not just because it's so dangerous...

...not just because it's almost always included in those "my first 10 unix commands" children's books...

...and not even just because it erases data and offers no "undo" function...

No, the reason I hate it is because it does a _poor job of doing even what it claims to do_.

## 32.1 "When I Delete a File, I Want it Deleted"

One argument I hear a lot when I complain about `rm` is that "I'm an advanced user. When I tell my computer to delete a file, I mean DELETE the file."

Lies, all lies!

If you wanted a file well and truly deleted, then you'd use `shred`. You'd delete a file _and all traces_. But you don't. Nobody does. Because nobody wants to have to make the mistake of shredding important data and having to face the reality that it's _gone_.
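
For the record, a genuine erase is no harder to type than `rm`. Here's a minimal sketch using GNU coreutils' `shred` (with the standard caveat that journaling and copy-on-write filesystems can keep stray copies of your data anyway):

```
# overwrite the file's contents, then zero it out and unlink it;
# --zero hides the shredding, --remove deletes the file afterwards
shred --zero --remove secrets.txt
```

And yet nobody types that, because nobody actually wants it.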

You see, in a bizarre twist of complaint-reversal, my issue with `rm` can also be expressed as both:

  * I hate `rm` because it erases data and offers no recourse.

  * I hate `rm` because it does not erase data effectively.

It looks contradictory, but actually they are two sides of the same wonky coin. Let's say we use `rm`. Yay the file's gone. Oh wait, no...I needed that file. Panic panic panic. OK, OK I found a tool called PhotoRec and another one called Scalpel, and these will scan the hard drive for "erased" files, identify them by headers, and rescue the thing I "erased".

Well, heck.

To me that sounds a lot like a program that was meant to _erase_ something but failed to properly do so.

Some people fall back on that shortcoming when they make a mistake, but a tool doesn't become a better tool because nobody minds when it fails, nor does a user become a better user because some other command happens to be able to rescue the data they accidentally erased.

In fact, I should file a bug against `rm`, because if the goal is to erase data from a drive, then it frequently fails.

In fewer words: I am calling shenanigans.

If you really want a command to erase data, then man or woman up and alias `rm` to something serious about erasing data. Otherwise, your safety net doth betrayeth you.
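
If you want to take that dare literally, a hypothetical line in your shell config would do it (hypothetical, and genuinely destructive; note that `shred` won't recurse into directories the way `rm -r` does):

```
# in ~/.bashrc: every "delete" now really destroys data
alias rm='shred --zero --remove'
```

Nobody sets this alias. That's rather my point.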

## 32.2 "I've Never Had a Problem with It"

This is another excuse I hear a lot.

I believe there's truth in this statement, because there are only a few ways `rm` really screws people over:

  * You are a new user and do not understand how to use `rm` (or, more likely, paths or wildcards).

  * You are an experienced user and make a stupid mistake when composing a command involving `rm`.

Nine times out of ten, these things don't happen. Usually, you're a new user, you carefully delete a file, and you move on. Or you're an experienced user, and you skillfully construct a `for` loop and `rm` a few directories, and then you move on. Not a problem.

Nine times out of ten.

In order for you to hit that tenth time, statistically speaking, you have to use the unix terminal enough that you mess something up involving `rm`. There are users out there who use the unix terminal pretty infrequently, or maybe they use it frequently but don't have occasion to use `rm` all that often. Visual effects artists are a great example of the former; they use Linux for the special effects art that they make, but they only use the terminal (if at all; it depends on the studio's workflow) to launch an application or send a job to the render farm. Sys admins are often a good example of the latter; they use Linux to maintain servers and network infrastructure, but they don't usually go around deleting files. They have scripts that monitor disk space, and when they _do_ delete files, it's just to get rid of that one tarball they downloaded and can now ditch, not something they repeat often.

So the dangers of `rm` are, basically, mitigated through sheer disuse.

The problem really surfaces at its worst for people who use the unix shell as a primary operating environment. You know the types; they're the ones whose answer to "what's your favourite file manager?" is `bash`. For them (I mean "us"), it's not a matter of _if_, but a matter of _when_.

## 32.3 A Sane Replacement

Why do we think it's acceptable to set users up like this? Why are we not only handing users `rm`, but actively encouraging them to use a tool that does a bad job of erasing data and a great job of erecting a barrier between the user and the [third-party] **undo** function?

> Let's face it: the responsible thing for `rm` to do would be to integrate something like `scalpel`. It already does a poor job of the one task it claims to exist for, so if it's going to leave the remnants of an "erased" file lying around, it may as well provide an option to un-"erase" the file it didn't manage to erase.

I'm not going to write about it extensively here, but there is a better way, and it's to stop using the `rm` command. Instead of doing a bad job at erasing a file, do a good job at moving it to a temporary holding area. We could call this area...oh, I don't know, a "trash" bin (like when you throw a piece of paper in the trash, but do not immediately incinerate it). It's a clumsy analogy that probably will never catch on, but it's a starting point, right?

To this end, I developed a very simple `trash` command. You can download and install it from slackermedia.ml/trashy. It's simple, it's [currently] written in BASH so it's easy to reverse engineer and modify, and it works. It works with wildcards, it works with **find**, it works in loops, it works on files and directories. It moves your stuff to a temporary directory (by default, it uses the trash bin that your desktop uses) and only invokes `rm` when you tell it explicitly to `--empty` the trash (it does not use `shred` by design, as its goal is to maintain levels of **undo**, not zero out data, although with the magic of environment variables, you could swap `rm` for `shred` in `trashy`).
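
Going by the behaviour described above, usage looks something like this (the file names are obviously just examples):

```
trash oops.log old-builds/    # moved to your desktop's trash bin, not erased
trash --empty                 # only now does rm actually get invoked
```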

If your inner `rm`-elitist shudders at the thought of moving files instead of pseudo-erasing them, then feel free to set up a cron job that empties `trash` every hour or so. At least then you get a little bit of buffer time before your "erased" files can only be rescued by way of a third party application like `scalpel` or `PhotoRec`. But really, you should be using `shred`, because I'm officially calling your bluff.
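
That cron job is a one-liner. Assuming `trash --empty` runs non-interactively, the crontab entry (added with `crontab -e`) would look something like this:

```
# empty the trash at the top of every hour
0 * * * * trash --empty
```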

If you use the shell on a daily basis for everyday computing tasks, _use trashy_.

[EOF]

Made on Free Software.

# 33 Non Geeks and Linux

An interesting phenomenon happens after you've been a Linux user for a long while: people start referring to you as a "geek" or a "computer nerd". When it started happening to me, I was a little surprised. Was I a geek? I had never thought of myself as a geek. But I must be, since people were telling me I was. And once I started using Linux, I admittedly had started thinking a lot more about computers and how they work. And yet, it seemed a little trite.

On the one hand, it seems that being called a geek or a nerd is a compliment. After all, nerds are smart, yes? People are looking at you, sat at your computer, and they recognise that you have some skill, and they acknowledge you for your talent. That can't be a bad thing. So why doesn't it feel like a compliment? Is it because maybe they're actually looking at you doing _some thing_ they don't understand, so they call you a geek just for the lack of a better term? That's not so much a compliment as it is a dismissal, or at least it might feel that way sometimes.

Maybe it's the same mentality as any other label, like telling a friend about that great new movie you saw, or a really interesting book you're reading. They're not sure they want to invest too much in what you're saying, so they look it up online and they say "oh, it's a sci fi flick" or "oh it's a fantasy book" and they react accordingly. You can try to tell them that they don't get it; it's so much more than "just" another sci fi movie, or just another Lord of the Rings rip-off, but they won't listen, because they looked it up, they saw the label, and now they know. They get it, or at least they think they do.

Or maybe it's also a little like being really excited about a new recipe, and someone saying that while they aren't interested in hearing about it or trying it if you cook it, they certainly see that you spend time in a kitchen.

Or hey, maybe it is what it is: a diverse term that can mean a lot of different things depending on context and who's using it. Maybe you find it offensive, maybe you find it complimentary, maybe you find that it brings people of the same interests closer to you and drives those without those interests away. So we get called "geeks" and "nerds", and we own the term, and "geek culture" pervades, and changes, and adapts. Pretty cool, huh?

The real danger (such as it is) is that "geek" can become a word for "magician" or "wizard". People look at geeks and assume that we (geeks) were just sorta born this way. I realise that some people do seem to have an aptitude for, say, mathematics or for complex logical thought, but not all geeks are innately "geeky". By that, I mean a geek doesn't just sit down at a new OS and automatically know how to re-program it. Seriously, watch the most mystical Linux geek you've ever met try Plan 9 for the first time. It's not impressive.

## 33.1 West Across the Ocean Sea

When I first learnt of open source technology, I was an art student. I didn't own a laptop. I'd never taken any kind of computer class. I knew what I knew of computers from growing up with them, same as everyone else, and from the times that I broke them and had to make them work again because getting them to work again was more important to me than walking away and finding something else to do.

Being a visual arts student, I was dealing with a lot of video and large graphics. Since I was picking up odd jobs in my attempt to make a living from visual art, I tended to encounter _innumerable_ video file formats. You'd think handling them would be a common requirement, but the OS I was using at the time was (and still is) designed to work with only a small subset of video formats, with options to pay some extra money to "unlock" more.

That literally wasn't an option for me (because I was scraping by, financially), and besides it just felt wrong. Surely, this wasn't what technology had promised; we can generate digital videos, so surely we must also be able to play them on a computer. We shouldn't have to pay extra for that ability. And that's what led me to a project called FFmpeg. The first implementation of it that I used was a horrifically bad GUI frontend which, even with its flaws, managed to re-encode anything I ever needed it to translate for me. It was my secret weapon and secured me several high-paying jobs as a freelance "video guru" (for lack of a better term).
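
The GUI-less equivalent of what that frontend was doing for me is a single FFmpeg command. The file names here are invented, but the shape of the command is real:

```
# read in whatever mystery format arrives, write out a plain H.264/AAC MP4
ffmpeg -i mystery-clip.mov -c:v libx264 -c:a aac converted.mp4
```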

I can't overstate the impact that FFmpeg had on my life. I'm not talking emotionally, I'm talking about how it delivered what technology had always promised to be.

As a result of finding this strangely powerful and yet totally free FFmpeg, I discovered that Unix existed and was, in fact, now the basis for the OS sold by the computer vendor I'd been using my entire life. I remembered that my father had mentioned Unix as sort of the original computer OS, and that it was something that big companies and the government used.

It was all coming together. I started poking around at the Unix underbelly of my OS.

I read a book about Unix. It was an early edition of the Visual Quickstart Guide to UNIX, for the record. I learned what Unix was, how to use it, lots of commands, and found out about a public Unix server with $0 membership. I joined SDF and have been a member ever since (although I did change usernames about a year in).

As a result of this, people did start telling me that I was a geek. At first, I declined the label, because back in school it had been a badge of shame. Cool kids were not nerds. So I didn't want to be a nerd. But you can only hook up someone's printer so many times before you realise that however trivial it may be, you really do know more about computers than the average bloke.

I eventually faced facts: I had a modest skill that I could market. I could do simple, everyday tasks on people's computers and get money in return. It's like getting paid for eating lunch, or for taking out the trash on Friday, or whatever. Really simple things; an afternoon of work, in exchange for cash. Yes, this was good.

I printed business cards and posted them on the bulletin board at school, on subway walls, on community boards, and took jobs fixing people's computers. I eventually managed to obtain two independently broken laptops, which I combined into one working laptop. With this, I "modded" my OS so that it booted straight into X11 and the Enlightenment (e16) desktop. I learnt how to find source code, compile it into an application, and install it for everyday use. I started using cool new (very, very old) applications that ran only in a terminal window.

I still didn't exactly know that Linux existed yet. This was all just me messing around in terminals on a proprietary OS that happened to have Unix at its core. At some point, though, I discovered a funny command that would let me play Tetris in the terminal. It was actually in Emacs, but I didn't understand what that meant at the time, and the OS didn't ship with the Emacs GUI. This, in turn, revealed a license agreement buried deep within the computer. It was the GPL. I sat down and read it like a book; I was entranced. It spoke about an uncanny philosophy based on sharing information and knowledge, ensuring that users owned the _whole stack_ that created the data they generate. It was less a life-changing event than a realisation of the feelings I'd been having anyway, and it was about a lot more than just software.

I remember reading more than just the GPL; I think it must have been some kind of annotated thing, or maybe it was just a README, but through these "hidden" documents that shipped with Emacs by requirement, I learnt that there was an entire operating system that was itself open source. I switched over to it as fast as I could. Sure, I did probably half a year of research (it felt more like a week, at the time):

  * I read books about Linux and how it can be used as a desktop replacing the OS that shipped with one's computer
  * I read Linux "fan" magazines
  * I read Linux and Sys Admin (whatever that was) technical journals
  * I listened to Linux tutorial podcasts

I was getting poorer by the day at this point, being mostly an artist barely getting work, so I was reading all of these things for free in the bookstore. Eventually, it got to be too much for me to bear, and I finally borrowed someone's laptop, got a book with a Mandriva DVD in the back cover, and tried booting into this magical thing called Linux. I wasn't really expecting anything; I think I almost expected it to not work at all. But to my surprise, it booted. It booted to a desktop; I remember a blue wallpaper with a star logo on it. I remember moving the mouse around, seeing that it worked, and it was like a whole new world had suddenly opened up to me.

## 33.2 Melodramatic Aside

As a side note, I do wish, in some way, that everyone could experience the same kind of technological love story in their own lives. I look back at those memories, and it's entirely in soft focus with amber gels, warm sunlight, the whole nine yards. It was perfect. Today, it seems people get all the way to the end of my tale, sometimes, in one afternoon (or at least, they think they do). The internet says "Linux!" and you click the link, you download the ISO, you look at it in a virtual machine, and you're done. There's no sense of discovery, and when there is, it's mired in annoying comments online about which Linux is best or why Linux is horrible or whatever the issue is that week.

Anyway, that's a pretty good ending to the story; I investigated, I studied, I booted. Bright white light, all is revealed, roll credits.

But, you see, the story doesn't end there. That's actually the _start_ of my story.

## 33.3 12 Bar Blues

The story itself isn't as pretty as the backstory. Nobody wants to hear the real story, the part where, after learning that Linux existed and actually starting to use it, I realised that while I had built up several skills fixing computers and learning hardcore Unix commands, getting used to a whole different style of computing was an entirely different matter.

You know how when you live in America, you think it'd be really nice to visit, say, New Zealand? So one day, you take a holiday (I mean "vacation") in New Zealand and it's great. It has little oddities, but they're quaint and fun. Then you go back to America and resume life. Easy.

Then one day, you get a job in New Zealand. You move there to live. And that's when it hits you: it's different there. All the things you hated about life in American towns are now the things you wistfully, however involuntarily, dream about. Those little quirks you noticed on vacation are everyday annoyances now. It's _hard_ to adapt.

That's what switching your OS is like. It was easy at first, and fun. But then it becomes real life. Everyday life. And sometimes that's not fun. Anything different is bad, and anything similar is never similar enough.

## 33.4 Geek With No Name

So you see, dear reader, the fact that I get called "geek" now is not because I sat in a bookstore reading about the joys of the Linux desktop and the nuances of Unix shell commands.

The fact that I get called "geek" is because I re-learnt the very basics of computers. And I do mean the basics.

For instance, for my _entire_ computing life, the way to eject a disk was to drag it to the trash can icon. Did this make any logical sense? No, but that's how it was done, and I'd never thought to even question it, much less think that there might be another way for it to happen. Turns out, there are several other ways for this to happen, and it was on Linux that I got to learn that. Could I have learnt it elsewhere? Sure, but on Linux I didn't just learn about physical buttons on optical drives and right-click menus, I learnt about the underlying code that made it happen.
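
On Linux today, for instance, one of those other ways is a literal one-word command (the device path here is an assumption; yours may vary):

```
# ask the drive itself to open; no trash can required
eject /dev/sr0
```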

For my entire computing life, the menu bar was at the top of the screen, and it (get this!) _changed depending on context_. Bad UI/UX/Whatever design? "Horrific" is a better word, but I was convinced that it was not only the best but also the _correct_ implementation of the concept. It took me years to adapt to per-window menu bars, and years upon those years to accept that I had been wrong. Only recently did I have the opportunity to try my old OS again, and believe me when I tell you that I was the most surprised person in the room at seeing myself struggle with the inefficiency. (For the record, if a menu bar is non-contextual, it only requires one click to activate it, whereas a contextual menu bar always requires two unless it's already in focus. To say nothing of the constant roundtrips back up to the top of the screen of the left-most monitor _every time you need a menu_.)

For my entire computing life, installing an application was done by dragging an icon into a special folder, or by running a little install assistant. It didn't take me long to adapt to Linux's method, because it was just so darned easy, but I did miss that "special" applications folder where "all the best" stuff was kept (except when it wasn't, but you learn to live with quirks like that when it's all you know).

Believe me, there were times when the learning curve seemed like it would be too great. There were times when I hated Linux, and software developers, and open source. Heck, there are still times I hate software and computers; who doesn't? We all find a missing feature or a feature that's sloppy, and we bang our fist on the desk and think "what are those developers even doing all day, if not giving me that ONE feature that I suddenly want?". It's human nature, and it happens with everything we ever touch, whether we feel like we've been suckered into paying for something that isn't good enough, or like we would pay anything to make it better.

But I got through that, because I wanted to learn new stuff, and I didn't want to rely on magical computer devices and mysterious benevolent programmers who sometimes, if I prayed hard enough on a shooting star, blessed me with a new feature or a vital bug fix. I wanted to understand what I was dealing with, and at the very least to be able to identify what needed fixing, if not how to fix or work around it.

And that's what a "geek" is. That's what a nerd does. They, er, "we", _learn_. We look at a puzzle, and we insist that it gets solved.

## 33.5 Knowledge is Power, Blah blah blah

Pretty much anybody can learn new stuff. Not everyone has the time for, or the interest in, learning all there is to know about Linux (and, therefore, computers), and that's OK. But the word "geek" or "nerd" is not the right way to express that. If you really want to learn new stuff, then learn it. Don't be intimidated by someone in a cool t-shirt with thick-rimmed glasses talking about things so fancy you can't tell the nouns from the verbs. Don't go thinking that he or she got to be that way through birthright.

"Well, it's easy for you; you're a geek" doesn't mean anything. That's not how it works.

It was earned, and it wasn't easy, but it's attainable. You might not become a super star programmer who gets interviewed on the evening news about violence in video games or the latest new security flaw, but you **can** get this stuff.

You can be a geek. You have only to learn.

[EOF]

Made on Free Software.

# 34 Ode to Slackware

I thought I'd sit down and write about why I like Slackware, because that's one of those questions that people do ask you sometimes, and that you even ask yourself, because running Slackware in the age of Red Hat and the pop-age of Ubuntu does have weight. Whether you mean to or not, you're taking a stand by running Slackware, so the natural question is _why_? Or maybe, to put words into mouths, the question is in fact _why bother_?

Like everything else, Slackware is on a sliding scale. To a hip and modern Linux user, Slackware is the most arcane system you could possibly run, but to a BSD user it's just a clone and doesn't qualify as literal UNIX. So I'm going to write, deliberately, about why I use Slackware Linux.

I had a notion, initially, that I ought to just give The One Reason for using Slackware. That's not to say that there is only one reason, but I thought maybe I could settle on the one, most important, most significant thing about Slackware that appeals to me.

Turns out, there isn't just one reason. There really are several reasons, and one **one reason** leads to another **one reason**, and pretty soon there are **one reason**s lying all over the place. So let's dive in.

## 34.1 The Classic Reason(s)

The trope about Slackware is this: "it's stable". Most people who say that don't really understand what that means. Does it mean that the OS never crashes? Does it mean the applications running on the OS never crash? And if so, what about Slackware makes applications never crash? Why don't they Never Crash on other distros? Or maybe it means you can run it for 8 years without ever requiring a reboot? Or is it just, _ya know_, stable?

It actually means none of these things. When I admit to people that, in spite of it being trite to say so, Slackware is stable, I mean that Slackware is predictable.

It takes a long time for Slackware to get assembled. I'm not on the core team myself, so I don't really know for certain, but I suspect it's because picking out a sensible release from 1481 different projects, choosing versions not so far back that they negatively affect one group of users but not so recent that they're unproven, is a pretty serious and difficult task. But that's what the core team behind Slackware does; they survey the landscape, they see what kind of new developments have been happening, what's getting updated, what's changing, and then at some point in this cacophonous melange of bits, they take a snapshot and put it on a disc with the intent that the disc will be in use for the next 3 or 4 years.

It's not magic, they don't do anything terribly special to the software they collect, they just do a whole lot of sensible, informed, and street-smart configuring, adjusting, and testing.

In a way, I've just described any given open source unix distribution. And yet, I haven't; a good number of distributions out there focus on different aspects of software. Some aim to deliver frequent updates so that the user feels like they're getting the important latest versions of software. Others aim to deliver cutting edge code so that developers have a sandbox with all the latest, and potentially buggiest, features. Others are interested in delivering a platform for a specific audience, so it may be strong in one area but lacking in another. You get all kinds, and that's good. With Slackware, you get a predictable, known quantity of software that appeals to a general user base without any bias toward any one group.

## 34.2 Software and /Extras

What many modern Linux users forget (or don't realise) is that at one point, the software you got on a Linux disc _was_ your repository. There weren't fancy online repositories with an ever-rotating stock of updated software and new releases; you got the wares on your disc, and you installed what you needed when you needed it. If you wanted more, then you could get more elsewhere (like SunSite, and other archives), but generally your _distribution_ was the disc you brought home with you from the computer store.

Slackware still works under that model, and it's downright refreshing. I know that the 1400 packages I install from my install disc are "official" Slackware packages. The core team and anyone brave enough to run -current have tested these packages and are at least nominally familiar with them. There's a sense of curation. The software on the disc, plus the optional stuff in the disc's `/extra` directory, is what makes up Slackware.

You want more? You can get more from SlackBuilds.org and Slacky.eu and AlienBOB and so on. But those aren't Slackware.

Slackware is Slackware.

It's maybe a subtle distinction, but other distributions are harder to pin down. Is distribution **Foo** the ISO you download and `dd` to your thumbdrive, or is it that ISO plus 25,000 packaged applications in its online repository? Do I believe all 25,000 packages have been carefully tested and vetted? How does a user keep up with the changes happening to all their personally essential software? What if I want to update `libQuux` without also updating `libCorge`?

## 34.3 Upstream

Hey, speaking of software!

Slackware packages software by doing three things:

  1. Download the software
  2. Configure the software according to the software's build options
  3. Package the software

And that's it. Here are some of the things it doesn't do:

  * Arbitrarily define requirements and dependencies when no runtime dependency exists
  * Configure builds based on politics or whimsy
  * Add scripts to intercept or interrupt a normal install
  * Provide default post-install configuration
  * Integrate advertising deals and search result payment plans such that users generate income for the distribution
  * Bundle untrusted software at the risk of user privacy or distribution stability

And that's probably not even a complete list.

With Slackware, upstream is king. Not everything can be packaged together in a pure "vanilla" state, because a lot of software out there has lots of config options depending on what other software it's being bundled with, but generally Slackware ships what it gets from upstream sources.

This is huge, for me. I've used many distributions (and remember, I'm including both Linux and BSD, here) that provide "processed" software, as in _hot dogs_ instead of the whole cow. Sometimes that's nice; I'll admit that installing something and actually being able to launch it without reading hundreds of pages of documentation on how to fill in its configuration files can be nice, but at the same time the burden for that experience really is upon the upstream project. It's nice that some people make it easy _post facto_, but I'd much rather have add-on scripts that I can run as needed than defaults that I might not expect or want.

## 34.4 Packaging

I'm handy with RPMs, and even handier with `rpmbuild`, and I quite like the format in general. Macros make the packaging process easier, and the RPM spec file is clean and intuitive.

But let's be honest: POSIX was built for scripting. Anything you can do manually on a POSIX system, you can script, and you can share that script so other people can do what you did. That's the true beauty of POSIX, and it's the very thing that lies at the core of the Slackware user experience, from top to bottom.

The `SlackBuild` package format (such as it is) involves a loosely-standardised (by example) shell script that unpacks, configures, and builds software downloaded straight from upstream. If _you_ do it once, then you can do it 100 other times, and 100 of your friends can do it 100 times. It's simple, it's de-centralised, it's lightweight and scales 100% according to the computer it's run on, and it works. As a package maintainer for SlackBuilds.Org and for Slackermedia, I can attest that it works.

The great thing about this model is that the user can modify the build script with almost no effort. Even less effort, if you write the script to accept a few key variables. It's all done in BASH, so if you use Linux, you've already cleared the barrier to entry. What all of this means is that there are no arbitrary configurations. I can install new software with completely custom options, and I can even re-install stock software by revising Slackware's buildscripts and re-building. It sounds complex, but it's shockingly simple.
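
To illustrate, here's a stripped-down sketch in the spirit of a SlackBuild; it's modelled on the SlackBuilds.org conventions rather than copied from any official script, and the project name and paths are purely illustrative:

```
#!/bin/sh
# minimal SlackBuild-style sketch: unpack, configure, build, package
PRGNAM=foo
VERSION=${VERSION:-1.0}   # key variables are overridable, as described above
BUILD=${BUILD:-1}
CWD=$(pwd)
TMP=${TMP:-/tmp/build}
PKG=$TMP/package-$PRGNAM

set -e
mkdir -p $TMP $PKG
cd $TMP
tar xvf $CWD/$PRGNAM-$VERSION.tar.gz
cd $PRGNAM-$VERSION

./configure --prefix=/usr     # tweak the upstream build options here
make
make install DESTDIR=$PKG

cd $PKG
/sbin/makepkg -l y -c n $TMP/$PRGNAM-$VERSION-x86_64-$BUILD.tgz
```

Change a variable, re-run the script, and you've rebuilt the package your way.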

The scripting doesn't stop there; the tools that install, upgrade, remove, and track the software are all written in BASH and are not just easy to hack but even portable (I've ported them myself).
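
Day-to-day, those tools look like this (the package file name follows the convention from the sketch above):

```
installpkg foo-1.0-x86_64-1.tgz    # install a freshly built package
upgradepkg foo-1.1-x86_64-1.tgz    # swap it for a newer build
removepkg foo                      # cleanly remove it, every file tracked
```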

## 34.5 Track Record

I might make mention of Slackware's long history. It's been built and distributed by the same guy since 1992, without interruption, and without any major change. I booted into Slackware 1.01, Slackware 3.4, and 14.2 over a weekend, and outside of `pkgtools` there was basically no difference in Slackware core.

To be fair, the same general statement can be made about POSIX in general; it has a pretty good track record and has largely remained unchanged in how it operates, at least at its core. But elsewhere, major tech has come and gone over the years, whether it's sound systems or init systems or package kits or desktops. You just don't get that churn with Slackware.

## 34.6 Paid

I don't actually like money, and I could go on and on about how money affects things and how humans interacting with money affects things, but I'll stay on topic here and just say that I actually _like_ that Slackware offers a subscription plan. I like that when I want to give money to Slackware, I can. I can donate, or I can purchase something from the store. I have the option to do that.

The reason I like this is because some people do have money, and everyone needs money. I'm not saying this is good, in itself, I'm just saying that realistically, the people building Slackware have to eat (and buy computers, and pay for electricity, and so on). To that end, I like to be able to contribute, when I have the ability to do so, monetarily. Doubly so, since Slackware is the foundation upon which I have built my own income; if it weren't for Slackware, my employment for the past 6 years would have looked a lot different. Sure, I could have built a career on CentOS or Mageia or SUSE or some other stable distribution, but Slackware is what I chose early on, and that's what I use. So I owe it something, and I like to be able to support it.

## 34.7 Unix

And finally, Slackware's stated mission is to be "the most UNIX-like Linux distribution out there". I'm a fan of UNIX. I like UNIX. And this is a bold and possibly heretical statement, but in my opinion Slackware out-UNIXes some of them UNIXes out there (notice how I'm politely refraining from naming names).

The devil is in the details, and Slackware's details are really nice.

[EOF]

Made on Free Slackware.

# 35 Colophon

As you may expect after reading this book, nothing but the very best open source software was used to produce it. Since the whole stack is open source, it's difficult to list _every_ open source application I used, but the ones most applicable are:

  * Slackware Linux is the operating system I run (Linux is, technically, just the kernel). According to its mission statement, Slackware is the "most Unix-like of all Linuxes". With it, I get all the benefits I feel I'd get from unix, plus all the benefits of a truly remarkable kernel, drivers, libraries, and community.

  * You thought your text editor was good? Don't even enter the competition until you've tried GNU Emacs. It is to text editors what Linux is to operating systems; total control, total customisation, and ultimate, terrible, awe-inspiring power.

  * Illustrations were done in Krita. I'm not a good illustrator but Krita makes it fun to try.

  * The cover of this book was designed in Inkscape, and it uses artwork from the Creative Commons site openclipart.org; specifically, a computer icon uploaded by user `jcartier`.

Additional art includes:

  * http://openclipart.org/detail/183205/pipe
  * https://openclipart.org/detail/175961/file-icons
  * https://openclipart.org/detail/174316/games

  * All fonts used are also open source. Specifically, I use Kabel (the old official KDE font) and Bazaronite. IBM_Nouveau is the code font. There are some other fonts (like the Liberation family) that snuck in here and there, but mostly those are what I used.

The workflow for this book revolved largely around Pandoc, an amazing little application that parses text from nearly any format and converts it to nearly any other format, and even renders and packages it up as EPUB or similar. Aside from having written a paragraph singing its praises, I am left speechless. This is one of those tools that you use, and then later you sit back and think about, and find yourself shaking your head and muttering "This should just _not_ be free!"

But of course, it is. And that's the beauty of this whole culture, isn't it? It's about sharing information, labour, and resources. Small wonder that I'd write in excess of 60,000 words marveling at it.

Thanks for reading.

Oh yeah, by the way, I do a podcast at GNU World Order. You should listen to it.

# 36 Attribution-ShareAlike 4.0 International

Creative Commons Corporation ("Creative Commons") is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an "as-is" basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible.

## 36.1 Using Creative Commons Public Licenses

Creative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses.

  *  **Considerations for licensors:** Our public licenses are intended for use by those authorized to give the public permission to use material in ways otherwise restricted by copyright and certain other rights. Our licenses are irrevocable. Licensors should read and understand the terms and conditions of the license they choose before applying it. Licensors should also secure all rights necessary before applying our licenses so that the public can reuse the material as expected. Licensors should clearly mark any material not subject to the license. This includes other CC-licensed material, or material used under an exception or limitation to copyright. More considerations for licensors.

  *  **Considerations for the public:** By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor's permission is not necessary for any reason–for example, because of any applicable exception or limitation to copyright–then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. More considerations for the public.

## 36.2 Creative Commons Attribution-ShareAlike 4.0 International Public License

By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-ShareAlike 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.

## 36.3 Section 1 – Definitions.

  1.  **Adapted Material** means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image.

  2.  **Adapter's License** means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License.

  3.  **BY-SA Compatible License** means a license listed at creativecommons.org/compatiblelicenses, approved by Creative Commons as essentially the equivalent of this Public License.

  4.  **Copyright and Similar Rights** means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights.

  5.  **Effective Technological Measures** means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements.

  6.  **Exceptions and Limitations** means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material.

  7.  **License Elements** means the license attributes listed in the name of a Creative Commons Public License. The License Elements of this Public License are Attribution and ShareAlike.

  8.  **Licensed Material** means the artistic or literary work, database, or other material to which the Licensor applied this Public License.

  9.  **Licensed Rights** means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license.

  10.  **Licensor** means the individual(s) or entity(ies) granting rights under this Public License.

  11.  **Share** means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them.

  12.  **Sui Generis Database Rights** means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world.

  13.  **You** means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning.

## 36.4 Section 2 – Scope.

  1.  **_License grant._**

    1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to:

A. reproduce and Share the Licensed Material, in whole or in part; and

B. produce, reproduce, and Share Adapted Material.

    2.  **Exceptions and Limitations.** For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions.

    3.  **Term.** The term of this Public License is specified in Section 6(a).

    4.  **Media and formats; technical modifications allowed.** The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material.

    5.  **Downstream recipients.**

A. **Offer from the Licensor – Licensed Material.** Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License.

B. **Additional offer from the Licensor – Adapted Material.** Every recipient of Adapted Material from You automatically receives an offer from the Licensor to exercise the Licensed Rights in the Adapted Material under the conditions of the Adapter's License You apply.

C. **No downstream restrictions.** You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.

    6.  **No endorsement.** Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i).

  2.  **_Other rights._**

    1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise.

    2. Patent and trademark rights are not licensed under this Public License.

    3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties.

## 36.5 Section 3 – License Conditions.

Your exercise of the Licensed Rights is expressly made subject to the following conditions.

  1.  **_Attribution._**

    1. If You Share the Licensed Material (including in modified form), You must:

A. retain the following if it is supplied by the Licensor with the Licensed Material:

      1. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated);

      2. a copyright notice;

      3. a notice that refers to this Public License;

      4. a notice that refers to the disclaimer of warranties;

      5. a URI or hyperlink to the Licensed Material to the extent reasonably practicable;

B. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and

C. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License.

    2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information.

    3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable.

  2.  **_ShareAlike._**

In addition to the conditions in Section 3(a), if You Share Adapted Material You produce, the following conditions also apply.

  1. The Adapter's License You apply must be a Creative Commons license with the same License Elements, this version or later, or a BY-SA Compatible License.

  2. You must include the text of, or the URI or hyperlink to, the Adapter's License You apply. You may satisfy this condition in any reasonable manner based on the medium, means, and context in which You Share Adapted Material.

  3. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, Adapted Material that restrict exercise of the rights granted under the Adapter's License You apply.

## 36.6 Section 4 – Sui Generis Database Rights.

Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material:

  1. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database;

  2. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material, including for purposes of Section 3(b); and

  3. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database.

For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights.

## 36.7 Section 5 – Disclaimer of Warranties and Limitation of Liability.

  1.  **Unless otherwise separately undertaken by the Licensor, to the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You.**

  2.  **To the extent possible, in no event will the Licensor be liable to You on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this Public License or use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. Where a limitation of liability is not allowed in full or in part, this limitation may not apply to You.**

  3. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.

## 36.8 Section 6 – Term and Termination.

  1. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically.

  2. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates:

    1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or

    2. upon express reinstatement by the Licensor.

For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License.

  3. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License.

  4. Sections 1, 5, 6, 7, and 8 survive termination of this Public License.

## 36.9 Section 7 – Other Terms and Conditions.

  1. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed.

  2. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License.

## 36.10 Section 8 – Interpretation.

  1. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License.

  2. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions.

  3. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor.

  4. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority.

* * *

Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the "Licensor." Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at creativecommons.org/policies, Creative Commons does not authorize the use of the trademark "Creative Commons" or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses.

Creative Commons may be contacted at creativecommons.org
