Quotes: Rich Dad, Poor Dad

Taking quotes from books is new to me, but since Moon Reader does it so effortlessly, I have some notes.
The two dads book takes a lot of flak on the internet, certainly from some people who seem to be writing the same style of books with less success. I do wonder where these people find the time to write page after page of out-of-context crap.

That being said, I enjoyed the following passages 🙂

"One dad had a habit of saying, 'I can't afford it.' The other dad forbade those words to be used. He insisted I say, 'How can I afford it?' One is a statement, and the other is a question. One lets you off the hook, and the other forces you to think."

"There is a difference between being poor and being broke. Broke is temporary, and poor is eternal."

"Instead, rich dad required his children to say, 'How can I afford it?' His reasoning: the words 'I can't afford it' shut down your brain. It didn't have to think anymore. 'How can I afford it?' opened up the brain. Forced it to think and search for answers."

“I’ve noticed that my friends with money talk about money. And I do not mean brag. They’re interested in the subject. So I learn from them, and they learn from me. My friends, whom I know are in dire straits financially, do not like talking about money, business or investing. ”

(Rich Dad, Poor Dad)

Quick GlusterFS Raspberry Pi money math

I did a quick bit of math, considering replacing my latest Gluster brick with Raspberry Pi bricks. I'm assuming €125 a disk for the HDs, as the actual price doesn't matter much here. I'm also using the same price for both options, even though the Raspberry Pi needs USB disks, which will cost more if they're even available in the biggest sizes. But let's ignore that for a second. Let's also ignore the performance reports I found on Google+.

The current setup: a mini-ITX brick with 4 HDs in software RAID 4 on Debian.

  • 4 disks (€500), 3 actual storage disks, 1 loss
  • Mini-ITX hardware, c2-rack-v3 (€400, this is an honest estimate)
  • Totalling €900


Compare that to the Pi solution, where the numbers generally go up…

1 disk with redundancy would mean

  • 2 disks (€250), 1 actual storage disk, 1 loss
  • Raspberry Pi hardware, €30 a box, one box per disk (€60)
  • Totalling €310 for a 2-disk solution

Which looks really nice, but because the big cost is actually in the disks, keeping their number down matters most for keeping the price down. With that in mind, doing the math for a storage cluster paralleling the mini-ITX brick, we get the following: replication over 2 disks, then putting those pairs together to make one big storage volume, i.e. RAID 1+0 with 6 disks.

  • 6 disks (€750); 3 actual storage disks, 3 lost disks
  • Raspberry Pi hardware; €30 a box, one per disk (€180)
  • Totalling €930

So: more expensive, not to mention the mess of boxes and hard disks and network cables and the switch to connect them all… One could consider USB hubs and multiple disks per Pi, which would actually save €90 in Pi boxes and make it slightly cheaper, but I expect it would impact performance.
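For anyone who wants to poke at the numbers, here's a quick bash sketch of the sums above (the prices are just the rough estimates from this post, so swap in your own):

#!/bin/bash
# Rough cost comparison using the estimates from this post:
# €125 per disk, €400 for the mini-ITX box, €30 per Raspberry Pi.
disk=125; itx=400; pi=30

itx_total=$(( 4 * disk + itx ))     # 4 disks in the mini-ITX brick, 3 usable
pi_total=$(( 6 * disk + 6 * pi ))   # 6 disks over 6 Pis, RAID 1+0 style, 3 usable

echo "mini-ITX brick: €${itx_total} (€$(( itx_total / 3 )) per usable disk)"
echo "Pi bricks:      €${pi_total} (€$(( pi_total / 3 )) per usable disk)"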

GlusterFS also offers striped storage, but striping is bad (no sense in repeating the arguments, they're all out there, and I don't have a single 100 GB+ file; not even one).


An interesting consideration, but in the end, a Pi gluster doesn't make sense in my situation. It'd be real cool though 😀

As an aside, I have similar considerations when it comes to growing the cluster with new mini-ITX boxes and retiring older boxes (most notably the initial 2-disk brick). I will probably have to retire the 2-disk box when the 3-brick cluster gets full, and I will probably never expand to 4 boxes, given the trend towards bigger disks and the power consumption to consider. But time will tell 🙂

The fruits of peace

If I maintain silence about my secret it is my prisoner; if I let it slip from my tongue, I am its prisoner. On the tree of silence hang the fruits of peace.

Look at me, quoting from books. This one is from The Schopenhauer Cure again, and it rings true. Though an even more fitting quote might be made about diplomacy.

A quote from The Schopenhauer Cure

That man had a little problem with women 🙂

In old age, reminiscing about his parents, Schopenhauer wrote:
Most men allow themselves to be seduced by a beautiful face…. nature induces women to display all at once the whole of their brilliance…and to make a ā€œsensationā€ā€¦but nature conceals the many evils [women] entail, such as endless expenses, the cares of children, refractoriness, obstinacy, growing old and ugly after a few years, deception, cuckolding, whims, crotchets, attacks of hysteria, hell, and the devil. I therefore call marriage a debt that is contracted in youth and paid in old age….

From The Schopenhauer Cure by Irvin D. Yalom

Newznab – Adventures in indexing

With the recent closing of Newzbin and then, a few days later, NZBMatrix, the heat is obviously on for Usenet indexers. The funny thing, though, is that these are, like any search engine, just aggregating metadata. The internet climate has obviously gotten extremely poisonous lately, where the mechanisms that the law provides (DMCA takedowns, etc.) don't satisfy the rights holders, and sites need to be bullied out of existence with enormous lawsuits. A bit of a downer for us honest Linux folk, looking to download the latest releases from a.b.cd.image.linux or a.b.linux.iso.

The software that does this, however, is extremely simple, and the unrefined data is available to anyone with a Usenet account. So I figured I'd give the whole thing a twirl and installed Newznab! I'm not about to run an indexer, obviously, but I was curious about it all. I'm glad to say that the traffic this generates isn't too horrible at any rate. The whole thing hinges on the mechanism that combs through the data and matches the different files up into a release. This relies heavily on regexes, or regular expressions: strings of text that filter the data out of the different tags these files are marked with. The classic version comes with 2 basic regexes, but these did not yield any results in my tests. Wondering what I was doing wrong set me on a quest, and as I don't like to fail or quit, I debugged some, found out I had downloaded a faulty zip, installed the proper file and went looking for a regex that would at least turn out some results. Some chatting on the IRC channel provided the following gem and allowed me to test the software to its fullest.

/^\[.*?(?P<name>[^\(\[\]#"][A-Z0-9.\-_()]{10,}-[A-Z0-9&]+).*?(?P<parts>\d{1,3}\/\d{1,3})/i

This turned out a wad of refined data, but when looking at the raw data, it missed more than it found! The software supports a lot of different regexes and you obviously need a variety of those to match up the enormous amount of data on these newsgroups.

Curious about it all, I decided to try the plus version, available to anyone who cares enough about the project to donate a small sum. This adds an interesting wad of functionality, among which the option to import NZB files into your search engine! An interesting option that spares your system the time-consuming activity of re-indexing those releases! The documentation talks about asking a friend for these files or even trying a simple Google search. Depending on how good a friend you asked or how spectacular your search was, this can yield a lot of files, and importing them will take a while. A strenuous enough process to justify an article and some extra code included in the plus package: "How to backfill newznab safely without bloating your database". The short reason to read it is that the import reads the files, dumps the raw data they contain into your database and extracts fresh refined data from there. If you do that with a gigabyte of data, things won't be too pretty 🙂 The altered import script paces the import to a more convenient rate.

Unpacking the tar files I found for import wasn't pretty either. But there's a simple enough solution for that. A bit of bash scripting and a bit of patience solves anything 🙂

for a in *.gz; do tar xvzf "$a"; mv "$a" "$a.OK"; done

This line loops over all .gz files in the directory it's executed in, unpacks the data and moves the original file aside. Should the archives contain more zipped files, you can just execute it again and again until all .gz files are gone. Since the "mv" part renames the original archives after unpacking, you won't waste time unpacking the same file twice.
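If you'd rather tackle a whole directory tree in one go, something like this should also work (the find/while plumbing is my own sketch, not from the article, so test it on a copy first):

find . -name '*.gz' -print0 | while IFS= read -r -d '' a; do
        # unpack each archive into its own directory, then rename it so a re-run skips it
        tar xvzf "$a" -C "$(dirname "$a")" && mv "$a" "$a.OK"
done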

As the article points out, however, the import process is not recursive, and if your Samaritan put the files in convenient little folders, you will wish it was! Again, a wee bit of bash scripting solves a lot. Also, I want to run the import all day until it's done, but I only want to update the binaries at night.

#!/bin/bash
# Analyses all files in all subdirectories for the parameter or if no parameter is given, uses the map parameter.
# Set the different paths and commands to match your install.
# The command status is available in great detail on stdout and in short form in $map/status
# Run this in a screen session to ensure continuous processing.

map=$*

if [ -z "$map" ];
then
map=/home/you/somefolder/withfiles/ ;
fi

php="sudo -u www-data /usr/bin/php ";
importmap=/var/www/nnplus/www/admin/ ;
import=nzb-importmodified.php ;
updatemap=/var/www/nnplus/misc/update_scripts/;
update=update_releases.php;
binaries=update_binaries.php;
binary_time=" 01 02 03 04 05 06 07 ";


function import_nzbs () {
        local map=$*;
        echo Import NZBs $map;

        count=$( ls $map | wc -l); 
        old=$(( $count +1 )); 

        echo $( date ) - $map  :  $count >> $map../status
        
        while [ $old -gt $count ] ; 
        do 
                date; 

                echo Scan $count files in $map;
                cd $importmap ;
                $php $import $map true ;

                if [ ! -z "$( echo $binary_time | grep $( date +%H ) )" ]; 
                        then 
                        echo $( date ) - Updating Binaries >> $map../status
                        echo Updating binaries.
                        cd $updatemap;
                        $php $binaries ;
                fi

                echo Releases for $map; 
                cd $updatemap;
                $php $update ;

                echo Counting down from $count ;
                old=$count; 
                count=$( ls $map | wc -l); 
                echo $( date ) - $map  :  $count >> $map../status
                echo eth0: $( ifconfig eth0 | grep "RX bytes" ) >> $map../status
        done
}

for a in $map*/; 
do 
        echo $( date ) - Start $a >> $a../status
        import_nzbs $a;
        echo $( date ) - Stop  $a >> $a../status
done

The script loops through all maps and runs the import until the file count no longer goes down (the import script deletes successfully imported data). It first does a file count, then analyzes a new batch of files (100 by default, more about that later), downloads the fresh binaries if the hour is in the $binary_time list, generates refined data, does a fresh file count and starts again until all files and all subdirectories are done. Quite a bit more convenient to me than the proposed altered screen approach. Not that that one's bad, mind you… just not ideal for what I need 🙂 Also, my script doesn't run the database optimization script, which would be a good addition.
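For what it's worth, this is roughly how I kick it off (import_all.sh is just what I'm calling the script above; name it whatever you like):

chmod +x import_all.sh
screen -S nzb-import
# then, inside the screen session:
./import_all.sh /home/you/somefolder/withfiles/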

Which takes us to the final thing worth mentioning.
The comments on the aforementioned article talk about altering the number of files per batch to find the sweet spot and work through the data as quickly as possible. The default is 100, and considering the overhead of the other commands, you can probably put a higher number in there. I tried some settings to find the sweet spot for my setup. These won't necessarily be the same on your machine, but they do show it's worth checking out! (My setup is a server running on an SSD, with all the data, the Samaritan files and the refined files on GlusterFS clustered network storage. One downside of this is that an "ls" takes a while, certainly with lots of files, causing extra overhead and making bigger batches worthwhile.)

The batch size setting sits at the bottom of the altered import script;
line 192: originally "if ($nzbCount == 100)"

Stats for the different nzbCount settings:
  • 100: 1100 files / hour
  • 200: 1800 files / hour
  • 400: 2400 files / hour
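And if you don't feel like editing the file by hand, a one-liner like this does the trick (the path assumes the layout from my script above; 400 is simply the value that worked best on my setup):

sed -i 's/$nzbCount == 100/$nzbCount == 400/' /var/www/nnplus/www/admin/nzb-importmodified.php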

There. Some data and scripts that would certainly have helped me when I was looking for info about the process. One thing I was curious about initially, but haven't gotten around to finding out, is how much up/download traffic this scraping generates. I've got 1.4 GB for yesterday, but there's a wad of data in there from me accessing the server, so never mind that number. Probably more like 400 MB.

And as an encore, a list of relevant links: SABnzbd; Sick Beard; CouchPotato; Headphones; Newznab; Derefer.me.

E-Ink

I want an e-reader device, really, I do. Yet I keep going over the list of potential devices, the Kindles, Kobos, Nooks and even the more obscure ones, and I keep missing the functionality that would really seal the deal for me. I'm not talking about color e-ink, by the way. It irks me that no e-reader manufacturer has bothered implementing color E-Ink even though it exists, even though that weird Windows-driven student thing has color e-ink. But I've gotten over that. I would be content with a regular e-ink model, and even though the lighting tech would be a plus, it wouldn't factor into the decision (much). And I'm not talking about their outrageous principle of selling limited book licences instead of actual books. That's only good for Amazon and maybe the publishers; everyone else loses.

Though in looking at these devices, it struck me: my problem with them is that they are all essentially dumb devices. One-trick ponies. And as far as I'm concerned, one trick too few.
My base requirement is reading epub files. I've got some epubs on the storage downstairs and I want to start by reading those. I obviously don't want to jump through hoops to get them onto the device and, honestly, would prefer them to be fetched over WiFi from a Calibre server.
Which kind of links with a second desideratum: I want the books to synchronize the current page between devices. I read a lot on my phone when I'm waiting in line, but when possible, I prefer reading on a bigger screen and, well… e-ink. The Kindle does this, but only for the books you bought in their store. And their app takes forever to start on my Android phone, which is a deal-breaker too; I can't wait too long for a book to open when I'm waiting for the elevator. My reading apps of choice are Aldiko and, of late, Moon Reader, and both apps have functionality for syncing between devices; I want one of those devices to be an e-ink reader.
Though today I would have liked to sift through a log file on it, and with most devices so far, that would have been a weird experience at best.

We all scoff at feature phones in this era of smartphones, yet we all fawn over e-ink readers that offer rudimentary reading options at best. We twist and turn to try and make them work for our situation, and when that reaches its limits, claim that it's a feature that helps us focus on reading and takes away distractions, not a bug by design. (It's what Apple tried with the iPod shuffle, and I still kinda hate them for having the gall to market that.)

But in the end, these e-ink devices are all unimaginative, locked-down reading devices that barely offer any of the awesome e-ink applications that anyone with a modicum of imagination could produce.

I still want one though. Preferably soon. And running Android. And more of the above.

Zombies, Run! for Android

I first heard about "Zombies, Run!" from their Kickstarter campaign in 2011. I was intrigued by the concept and kept an eye on it. I seriously considered supporting the project, but because they didn't support Android at first (they announced support later in the funding drive) and because I had kinda lost faith in the project, I decided not to fund it. That was probably my bad, considering it got $72,627 pledged of the $12,500 goal.

By now, however, the game has matured and is available on the Android Play market: Zombies, Run!, and it features a base-building concept with stats and more on the official site: zombiesrungame.com. The game was also featured on the Play Store, and that generally means it's at least worth a try, maybe even the money.

I have high hopes for exergaming; it's a simple concept that could make gaming very enjoyable. The principles of gamification are tried and true, and using them for motivation seems like a great idea for mankind as a whole. Initial examples of exergaming are just about any game that involves a physical side; Dance Dance Revolution and the likes come to mind. The Wii held a lot of promise and was actually advertised and reviewed for this potential, though in the end it was all quite disappointing to me. The 2003 EyeToy for the PS2 is an earlier attempt at getting you out of your chair, but it failed horribly, and the current versions, including the Microsoft Kinect, aren't making a splash either, beyond their initial novelty value. My favorite fitness game was Yourself!Fitness, but that's actually nothing more than a customizable fitness video. (I still have a soft spot for Maya!) It did get me moving though, back when it was almost new. There are attempts at putting gamification into sports through many social fora and even sites like Fitocracy, but they can't be considered exergaming as such. I'm not minimizing their value though; Fitocracy is a great tool for whoever uses it as a motivator. The exergaming genre has not been developed to its full potential, but Zombies, Run! might just be the one that does it right!

So I decided to buy the game and give it a try. I'll be back later, and hopefully at least slightly fitter, for more!
Couldn't have decided to get it at a worse time though: Max is draining our energy, and in all honesty, that GTA IV game I bought for the PS3 isn't helping either. Expect a follow-up soon though.

In the meanwhile, the game has gotten a lot of attention as Halloween came around and well-known bloggers like Violet Blue touched on the subject of running from zombies to get fit.

Call of Cthulhu: Wasted Land for Android

An opinion on Call of Cthulhu: Wasted Land.

Google Play Store: Call of Cthulhu: Wasted Land


I came upon the game through the current Indie Gala IX, which was offering the game in a pay-what-you-want deal.

The first experiences didn't bode well. The game restarts any time your screen locks, and as mine is set to lock very quickly, that gets irksome real soon. The controls appear to be a bit weird; I had to check the tutorial to see how the game works, and apparently it works as I expected, so I'll have to figure out how to do it "correctly". And several times, after starting the game, it crashed my phone. Annoying. But the reviews on the Play Store seem relatively positive, so it must just be a streak of bad luck. Time to play it some more.

Extended play eliminated a lot of my irks with the game.

  • The controls are not very intuitive, but they suffice.
  • The graphics are not too spectacular, but certainly up to par with any Android game you might find.
  • The story feels a bit weird to me because I simply don't see the Cthulhu mythos as a good place for a shooter-type game of any kind. I expect lots of angst, madness, weakness, etc., not the testosterone-driven gunslinging that this game offers. I might be behind on the Cthulhu game fashion, as I did skip Call of Cthulhu: Dark Corners of the Earth some years ago, but going back to the original stories, I don't think it fits. And honestly, I even find the Dark Corners approach slightly more fitting, since it at least has the "lonely guy fighting the impossible" angle.
    That being said, the old "It's WW2, the Germans have occult allies" trope is a well-trodden path; Indiana Jones comes to mind, but also the Tannhauser board game.

A final conclusion would be that this isn't a bad game. The controls could have been better, but they're not awful either once you get the hang of them. Don't get this if you're looking for a decent Cthulhu mythos game; you might want to consider Elder Sign: Omens instead. It's more of a board-game approach to the mythos, but at least it feels right. If, on the other hand, you're looking for turn-based combat, you might like this game. Although I personally prefer Cyberlords – Arcology; it's not an exact match, with more of an RPG side to it, but still better. Or, for the hard-line turn-based fans, maybe Battle for Wesnoth, even though I haven't tried it on Android yet.