Category Archives: Planet

Blog posts aggregated on the planet

Available for work starting February 2014

Quick message to let everyone know that I’m available for freelance work starting February 2014.

It’s an exciting new time for me :) I can finally take on new clients and expand my activities. I’ve had an exciting time so far, and this is the next step for me :) Fun!

Feels kind of redundant to repeat here, but for those new to my blog: I’m a Belgian IT expert, specialised in Linux and project management. I am interested in freelance system engineering or project management challenges.

More information and CV are available at http://www.dgtl.be/specialist/GertSchepens

Drop me a line if you need my talents :)
(or feel free to share if you think anyone else does)

Why I shop Google Music

I like buying music. I bought a lot of CDs and, contrary to some people, I have a wad of them to show for it. Nothing out of the usual though, a few shelves.
But I grew tired of buying disks, ripping them to digital and never touching them again. I only listen to digital files; I don’t have a decent CD player anymore. If I wanted to listen to one of those CDs, it’d have to be in the car, on the PS3 or on the external CD drive for my laptop, because even that thing doesn’t have a sodden CD drive anymore. And I seldom even use the external one.

I’ve been looking for a decent place to buy music digitally, but – as you can see in the many rants on here – it was a long, tough and ultimately frustratingly fruitless search. I used eMusic for ebooks for a while, but the music offering was extremely lacking. There are other places to go, but I require DRM-free downloadable files. Not many offer this.

Since the Google Music store works here though, I’ve bought 3 CDs and am loving the experience.

I bought the new Ozark Henry and am not loving it. I might delude myself that I’d give away or sell a physical disk I didn’t like, but looking at the collection, I really don’t usually do that. I bought the new Korn because HELLYEAH and I wanted to listen to it and… Then I decided to keep the spending in check and wait a month. And this is where that decision got put aside and where I really, really love the service!

Some very dear friends of ours recently got married! They had the same wedding DJ as we had; not by coincidence, as he’s another of my very dear friends. (He’s not a real wedding DJ though, he’s just a DJ, radio programme and all, who once in a while spins disks at the occasional wedding.) So I expected him to play our opening dance. (That’s Walking in Memphis, by the way, a very, very fun song to dance to!) And this is where it all goes to pieces: he didn’t have the song with him! Absurd in these days of digital music, considering he uses Serato, but he recently “upgraded” to a Mac and apparently storage is lacking? Quite the upgrade. But I digress.

 

All the technology in the world and I was going to miss our song that night? Don’t fucking think so!
Google music to the rescue :)

  • I opened the music service and bought Rhino Hi-Five: Marc Cohn for €2,99. (Fuck the not-spending-more-money, this is an EXCEPTIONAL situation!!)
  • Then I tethered my phone data via wifi to the Apple machine,
  • opened up https://play.google.com/music/,
  • located the newly bought record and downloaded it to the DJ’s setup.
    • (Copyright trolls, fuck off, he has the song in his CD collection about 10 times, he just forgot the disk and I lent him mine. Mind your own.)
  • #InternetWin
  • THEN I lied to my wife about the song just not being there
  • and a while later, she was very surprised to hear it start playing :)

We had a nice dance. People stopped to watch & that’s always nice ;)

 

So yes.
I like gMusic a lot. Buying from my phone, downloading DRM-free MP3s.
Take notes, rest of the providers. Or don’t. I don’t care.

Quick GlusterFS Raspberry Pi money math

I did a quick bit of math, considering replacing my latest Gluster brick with Raspberry Pi bricks. I’m taking €125 a disk for the HDs, as the actual price doesn’t matter that much here. I’m also using the same price for both options, even though the Raspberry Pi needs USB disks, which will cost at least as much or may not even be available in the biggest sizes. But let’s ignore that for a second. Let’s also ignore the performance reports I found on Google+.

The current setup: a Mini-ITX brick with 4 HDs in software RAID 4 on Debian.

  • 4 disks (€500), 3 actual storage disks, 1 loss
  • Mini-ITX hardware, c2-rack-v3 (€400, this is an honest estimate)
  • Totalling €900

 

Compared to the Pi solution — and here the numbers generally go up…

1 disk with redundancy would mean

  • 2 disks (€250), 1 actual storage disk, 1 loss
  • Raspberry Pi hardware, €30 a box, 1 box per disk (€60)
  • Totalling €310 for a 2 disk solution

Which looks really nice, but because the big cost is actually in the disks, keeping their number down matters most for keeping the price down. Keeping that in mind and doing the math for a storage cluster parallelling the Mini-ITX brick, we get the following: replication over pairs of 2 disks, then putting those pairs together to make one big storage volume, i.e. RAID 1+0 with 6 disks.

  • 6 disks (€750); 3 actual storage disks, 3 lost disks
  • Raspberry Pi hardware; €30 a box, 1 box per disk (€180)
  • Totalling €930

So: more expensive, not to mention the mess of boxes and hard disks and network cables and the switch to connect them all… One could consider USB hubs and multiple disks per Pi, which would actually make it cheaper (by about €90), but I expect it would hurt performance.
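
For what it’s worth, the totals above are trivial to recompute if your prices differ. A quick shell check, using the same assumptions (€125 per disk, €30 per Pi, €400 for the Mini-ITX hardware):

disk=125 ; pibox=30 ; itx=400
echo "Mini-ITX brick: €$(( 4 * disk + itx ))"       # 4 disks + board/case = €900
echo "Pi mirror:      €$(( 2 * disk + 2 * pibox ))" # 2 disks + 2 Pis = €310
echo "Pi RAID 1+0:    €$(( 6 * disk + 6 * pibox ))" # 6 disks + 6 Pis = €930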

GlusterFS also offers striped storage, but striping is bad. (No sense in repeating the arguments; it’s all out there. And I don’t use a single 100GB+ file; not even one.)

 

An interesting consideration, but in the end, a Pi Gluster doesn’t make sense in my situation. It’d be real cool though :D

As an aside, I have similar considerations when it comes to growing the cluster with new Mini-ITX boxes, retiring older boxes (most notably the initial 2-disk brick) and so on. I will probably have to retire the 2-disk box when the 3-brick cluster gets full, and will probably never expand to 4 boxes, given the trend towards bigger disks over time and the power consumption. But time will tell :)

Newznab – Adventures in indexing

With the recent closing of Newzbin and, a few days later, NZBMatrix, the heat is obviously on for Usenet indexers. The funny thing is that these are, like any search engine, just aggregating metadata. The internet climate has obviously gotten extremely poisonous lately: the mechanisms the law provides (DMCA takedowns, etc.) don’t satisfy the rights holders, and sites get bullied out of existence with enormous lawsuits. A bit of a downer for us honest Linux folk, looking to download the latest releases from a.b.cd.image.linux or a.b.linux.iso.

The software that does this, however, is extremely simple, and the unrefined data is available to anyone with a Usenet account. So I figured I’d give the whole thing a twirl and installed Newznab! I’m not about to run an indexer, obviously, but I was curious about it all. I’m glad to say the traffic this generates isn’t too horrible at any rate. The whole thing hinges on the mechanism that combs through the data and matches the different files up into a release. This relies heavily on regexes, or regular expressions: strings of text that filter the data out of the different tags these files are marked with. The classic version ships with 2 basic regexes, but these did not yield any results in my tests. Wondering what I was doing wrong set me on a quest, and as I don’t like to fail or quit, I debugged some, found out I had downloaded a faulty zip, installed the proper file and went looking for a regex that would at least turn out some results. Some chatting on the IRC channel provided the following gem and allowed me to test the software to its fullest.

/^\[.*?(?P<name>[^\(\[\]#"][A-Z0-9\.\-_\(\)]{10,}\-[A-Z0-9&]+).*?(?P<parts>\d{1,3}\/\d{1,3})/i
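
If you want to sanity-check a regex like this against a subject line before feeding it to Newznab, grep’s PCRE mode understands the same named-group syntax. The subject below is made up, purely for illustration:

echo '[12345] Some.Linux.Distro.v1.0-GROUP [01/20] "file.rar" yEnc' | grep -Pio '^\[.*?(?P<name>[^\(\[\]#"][A-Z0-9\.\-_\(\)]{10,}\-[A-Z0-9&]+).*?(?P<parts>\d{1,3}\/\d{1,3})'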

This turned out a wad of refined data, but looking at the raw data, it missed more than it found! The software supports a lot of different regexes, and you obviously need a variety of them to match up the enormous amount of data on these newsgroups.

Curious about it all, I decided to try the plus version, available to anyone who cares enough about the project to donate a small sum. This adds an interesting wad of functionality, among which the option to import NZB files into your search engine — an interesting option that spares your system the time-consuming activity of re-indexing those files! The documentation talks about asking a friend for these files or even trying a simple Google search. Depending on how good a friend you asked or how spectacular your search was, this can yield a lot of files, and importing them will take a while. A strenuous enough process to justify an article and some extra code included in the plus package: “How to backfill newznab safely without bloating your database“. The short reason to read it is that the import reads the files, imports the raw data into your database and extracts fresh refined data from there. If you do that with a gigabyte of data, things won’t be too pretty :) The altered import script paces the import to a more convenient rate.

Unpacking the tar files I found for import wasn’t pretty either. But there’s a simple enough solution for that: a bit of bash script and a bit of patience solves anything :)

for a in *.gz; do tar xvzf "$a"; mv "$a" "$a.OK"; done

This line loops over all .gz files in the directory it’s executed in, unpacks the data and moves the original file aside. Should the files contain more zipped files, you can just execute it again and again until all .gz files are gone. Since the “mv” part renames the original archives after unpacking, you won’t waste time unpacking the same file twice.
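
If the archives are already scattered across subfolders, a find-based variant of the same trick saves you from cd-ing around; a sketch using GNU find’s -execdir, with the same rename-after-unpack safeguard:

find . -name "*.gz" -execdir tar xvzf '{}' \; -execdir mv '{}' '{}.OK' \;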

As the article points out, however, the process is not recursive, and if your Samaritan put the files in convenient little folders, you will wish it was! Again, a wee bit of bash scripting solves a lot. Also, I want to run the import all day until it’s done, but I only want to fetch the binary data at night.

#!/bin/bash
# Analyses all files in all subdirectories of the folder passed as a parameter; if no parameter is given, it uses the default $map set below.
# Set the different paths and commands to match your install.
# The command status is available in great detail on stdout and in short form in $map/status
# Run this in a screen session to ensure continuous processing.

map=$*

if [ -z "$map" ];
then
map=/home/you/somefolder/withfiles/ ;
fi

php="sudo -u www-data /usr/bin/php ";
importmap=/var/www/nnplus/www/admin/ ;
import=nzb-importmodified.php ;
updatemap=/var/www/nnplus/misc/update_scripts/;
update=update_releases.php;
binaries=update_binaries.php;
binary_time=" 01 02 03 04 05 06 07 ";


function import_nzbs () {
        local map=$*;
        echo Import NZBs $map;

        count=$( ls $map | wc -l); 
        old=$(( $count +1 )); 

        echo $( date ) - $map  :  $count >> $map../status
        
        while [ $old -gt $count ] ; 
        do 
                date; 

                echo Scan $count files in $map;
                cd $importmap ;
                $php $import $map true ;

                if [ ! -z "$( echo $binary_time | grep $( date +%H ) )" ]; 
                        then 
                        echo $( date ) - Updating Binaries >> $map../status
                        echo Updating binaries.
                        cd $updatemap;
                        $php $binaries ;
                fi

                echo Releases for $map; 
                cd $updatemap;
                $php $update ;

                echo Counting down from $count ;
                old=$count; 
                count=$( ls $map | wc -l); 
                echo $( date ) - $map  :  $count >> $map../status
                echo eth0: $( ifconfig eth0 | grep "RX bytes" ) >> $map../status
        done
}

for a in $map*/; 
do 
        echo $( date ) - Start $a >> $a../status
        import_nzbs $a;
        echo $( date ) - Stop  $a >> $a../status
done

The script loops through all folders and runs the import until the file count no longer goes down (the import script deletes successfully imported data). It first does a file count, then analyzes a new batch of files (100 by default, more about that later), downloads the fresh binaries if the hour is in the $binary_time list, generates refined data, does a fresh file count and starts again until all files and all subdirectories are done. Quite a bit more convenient to me than the proposed altered screen thing. Not that that one’s bad, mind you… just not ideal for what I need :) Also, my script doesn’t run the database optimisation script, which would be a good idea.

Which takes us to the final thing worth mentioning.
The comments on the aforementioned article talk about altering the number of files per batch to find the sweet spot and work through the data as quickly as possible. The default is 100, and considering the overhead in the other command(s), you can probably put a higher number in there. I tried some settings to find the sweet spot for my set-up. These won’t necessarily be the same on your machine, but they do show it’s worth checking out! (My set-up is a server running on an SSD drive, with all the data — the Samaritan files and the refined files — on GlusterFS clustered network storage. One downside of this is that an “ls” takes a while, certainly with lots of files, causing extra overhead and making bigger batches worthwhile.)

The batch size is set at the bottom of the altered import script;
line 192: originally "if ($nzbCount == 100)"
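
If you’d rather not open an editor for that, a quick sed swap does the same; 400 being the value that came out on top for me, as the stats below show:

sed -i 's/nzbCount == 100/nzbCount == 400/' nzb-importmodified.php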

Stats for the different nzbCount settings:
100: 1100 files/hour
200: 1800 files/hour
400: 2400 files/hour

There. Some data and scripts that would certainly have helped me when I was looking for info about the process. One thing I was initially curious about, but haven’t gotten around to finding out, is how much up/download this scraping generates. I’ve got 1.4GB for yesterday, but there’s a wad of data in there from me accessing data on the server, so never mind that number. Probably more like 400MB.

And as an encore, a list of relevant links: SABnzbd; Sick Beard; CouchPotato; Headphones; Newznab; Derefer.me.

Introducing Max Schepens

Today we are proud to present to you, Max Schepens! Our second boy, brother to Aster!


Max, 3h 27m old!

To quote the official message:

50cm, 3310gr, 100% Max Schepens! Since 9:54, 6/11/12, and Gert, Viona and Aster are very happy!

The official baby card is still under embargo, but updates will follow as the embargo is lifted :) What I can already tell you:

Son, 3.310 kg, 50.0 cm
Max Schepens
06.11.2012
09:54, Jette

Baby gift registry (no obligation) at paradisio-online.be; tel. 053 76 81 96. Or BE66 0356 1109 0543

Also, there is 1 image and a load of extra text on there. And 4 folds :)

We are very happy and proud parents. Also, tired. Off to bed!


Cleaning the XBMC movie collection

I have a lot of movies on the XBMC server, and not all of them are worth watching. Generally, anything with a rating lower than 5 is probably not worth the time!

So I wanted a list of the bad movies, with their ratings! You can set XBMC up to store its data in a MySQL database, and that’s how I set it up, mainly because I want to share the database between the multiple XBMC set-ups around the house. Both the MySQL database and the regular SQLite database support queries, though this is tested on MySQL.

To execute a MySQL query, you need to log into the database somehow, probably phpMyAdmin or the command line (# mysql -u root -p), but if you managed to set up XBMC for MySQL, you probably won’t need help there.

All the data you need is in the database and you can find out all about how the database is defined on the XBMC wiki: XBMC databases

The two tables you need are the movie table, for the list of your movies, their ratings and any other info, and the files table, for information about the file name. The column names are not very self-explanatory and you really need the database reference wiki page to get anywhere. For example, the following is the movie table:

Column Name Data Type Description
idMovie integer Primary Key
c00 text Local Movie Title
c01 text Movie Plot
c02 text Movie Plot Outline
c03 text Movie Tagline
c04 text Rating Votes
c05 text Rating
c06 text Writers
c07 text Year Released
c08 text Thumbnails
c09 text IMDB ID
c10 text Title formatted for sorting
c11 text Runtime [UPnP devices see this as seconds]
c12 text MPAA Rating
c13 text [unknown - listed as Top250]
c14 text Genre
c15 text Director
c16 text Original Movie Title
c17 text [unknown - listed as Thumbnail URL Spoof]
c18 text Studio
c19 text Trailer URL
c20 text Fanart URLs
c21 text Country (Added in r29886)
c23 text idPath
idFile integer Foreign Key to files table

 

Next is putting it all together into a pretty SQL query:

select c00,c05,strFilename from movie join files on movie.idFile = files.idFile where c05 < 5 and c05 > 0;

This lists all the movies with a rating lower than 5, because nobody likes a bad movie, and higher than 0, because a 0 rating apparently only happens when the movie is not found. That being said, you might consider a second query to find out which movies aren’t recognised correctly :)
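
That second query is just the same join with the filter inverted. Something along these lines, run straight from the shell (replace xbmc_video with whatever you named your MySQL video database):

mysql -u xbmc -p xbmc_video -e "select c00, strFilename from movie join files on movie.idFile = files.idFile where c05 = 0;"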

I have a fairly good idea where my files are, so I didn’t need the file paths, but if you do, you can get to them by joining the “path” table into the query and adding “strPath” to the select.
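
A sketch of what that looks like, assuming the idPath key documented on the same wiki page:

mysql -u xbmc -p xbmc_video -e "select c00, c05, strPath, strFilename from movie join files on movie.idFile = files.idFile join path on files.idPath = path.idPath where c05 < 5 and c05 > 0;"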

And then I opened a browser and deleted all the waste-of-time junk.

 

As a next step, I looked into the options for cleaning the database and triggering updates from the command line. The Event Server can do this, but there appear to be some issues: installing the package from the official Ubuntu repository removes the XBMC package. Slightly weird and definitely worth looking into, but not today.
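
For reference, once the event client does install cleanly, triggering those actions from the shell should look something like this (untested on my end, for the reason above):

xbmc-send --action="CleanLibrary(video)"
xbmc-send --action="UpdateLibrary(video)"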

 

And for some reason, I like to kill XBMC. And it has 2 processes. And it needs a "kill -9" by the time I feel like killing. Sooooooooo, I put it into a simple bash script.

kill -9 $( ps -A | grep xbmc | grep -oE " [0-9]* " )
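
pkill should do the same dance in one go, assuming both process names contain “xbmc”:

pkill -9 xbmc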

 

Hope this helps, use it at your own risk, etc, and feel free to get back with feedback!

Gnome Shell Animated Background

Support for animated backgrounds in Gnome is not new, but the path to actually rolling your own and activating it on your system is anything but easy today. Ubuntu offers 2 magically changing backgrounds but no information on how to do this yourself. There is a cute “+” button that lets you add new static backgrounds, but that’s it. To make things worse, that plus button doesn’t let you add such a magical background either, once you finally create one.

An animated background for Gnome Shell consists of a wad of images you want to use as backgrounds and an XML file detailing how and when to show them. I was honestly expecting a single file instead of this multi-file concept, but apparently multi-image backgrounds are not really ready for the common user. A single tar file would solve the problem, so this should not be that far off.
Adding your file to the desktop menu is another problem altogether, as you can’t use the plus button to add the XML file. This requires either a workaround or a second XML file in a separate directory outside of your home directory.

To create such a background, start by collecting a wad of suitable backgrounds to rotate through (I expect lots of funny moments with NSFW backgrounds and presentations as this concept catches on) and put them together. You don’t actually have to do this, but it helps to have them in one place. I used “~/Pictures/Fallout” as I was creating a Fallout background theme and I like pictures to go into the Pictures folder. Next you need to create the XML. This file is formatted as follows:

<background>
<starttime>
<year>2009</year>
<month>09</month>
<day>06</day>
<hour>00</hour>
<minute>00</minute>
<second>00</second>
</starttime>

<static>
<duration>1797.0</duration>
<file>/home/gert/Pictures/Fallout New Vegas/1347371663_intonewvegas_543942.jpeg</file>
</static>
<transition>
<duration>3.0</duration>
<from>/home/gert/Pictures/Fallout New Vegas/1347371663_intonewvegas_543942.jpeg</from>
<to>/home/gert/Pictures/Fallout New Vegas/Fallout-New-Vegas_2010_03-06-10_14.jpg</to>
</transition>

..
</background>

 

The XML consists of a starttime tag and as many static and transition tags as you have backgrounds.
The starttime tag defines when the timer for changing the background starts running. Any past date will do, and unless you want to create a background that follows your day cycle or clock, the time doesn’t really matter either.
The static section defines which file to show and for how many seconds; the transition section defines a transition between two backgrounds and how much time it should take. Your last transition should lead back to your first image.
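
If you’d rather generate the XML than type it, a throwaway script along these lines will do. A sketch only: it reuses the durations from the example above, assumes all images sit in one folder, and wraps the last transition back to the first image:

#!/bin/bash
# Emit an animated-background XML for every image in $dir.
dir="$HOME/Pictures/Fallout New Vegas"
imgs=( "$dir"/*.jp*g )
n=${#imgs[@]}
echo '<background>'
echo '<starttime><year>2009</year><month>09</month><day>06</day><hour>00</hour><minute>00</minute><second>00</second></starttime>'
for (( i=0; i<n; i++ )); do
        cur=${imgs[$i]}
        next=${imgs[$(( (i+1) % n ))]}  # wrap around so the rotation loops cleanly
        echo "<static><duration>1797.0</duration><file>$cur</file></static>"
        echo "<transition><duration>3.0</duration><from>$cur</from><to>$next</to></transition>"
done
echo '</background>'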

Writing the file yourself is easy but tedious, and there are graphical interfaces available that do this for you. Sadly, these do not appear to be included in the default Ubuntu repository, and they all appear to be obsolete to some degree. I used Crebs and it works wonderfully, though its dependencies fail to declare the requirement for the python-glade2 package in Ubuntu, resulting in a crash until you install that package. The internet also provides the XML Background Creator for Gnome3 and a selection of interesting articles and even pre-made downloadable packages, though those don’t exactly install nicely.


Actually using your freshly created background is not that simple either. The GUI does not appear to provide any mechanism to add the XML file to the menu or otherwise activate it. There are 2 options, but neither is particularly user-friendly.

You can tell Gnome to use a certain background using gsettings on the command line, but this will only activate the background, not add it to your selection menu.

gsettings set org.gnome.desktop.background picture-uri 'file:///home/gert/Pictures/Fallout New Vegas/Fallout_New_Vegas.xml'

 

The second option is adding the background to the Gnome menu. You do this by creating a second XML file that tells Gnome where to find your spiffy new background XML. This file goes into

/usr/share/gnome-background-properties/

and is formatted as follows

# cat Fallout.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE wallpapers SYSTEM "gnome-wp-list.dtd">
<wallpapers>
<wallpaper>
<name>Fallout</name>
<filename>/home/gert/Pictures/Fallout New Vegas/Fallout_New_Vegas.xml</filename>
<options>zoom</options>
</wallpaper>
</wallpapers>

This file is fairly simple: it defines the name of the background, where to find the file detailing the background, and which options to enable, i.e. to zoom the desktop.

 

The whole process is pretty painless; except that initially I wrote here: “One problem I am experiencing though is the desktop not changing.” (Small problem, eh?) It turns out it’s OK; the issue is rather that the desktop doesn’t change at the correct time. It seems the starttime section is ignored: it should change around the hour and half-hour mark, but instead it’s been changing around 16 and 46 minutes past. I guess the system resumed the rotation at :16 and has been rotating from that point on. I don’t mind, so I did not look into it any further. My rotating background works like a charm :)

A script to commit all the GIT projects in a folder

I keep all the documents, code and whatnot for DGTL in several git repositories and synchronise those to secret remote locations (cool, eh?). But it’s a hassle to update them all, and why keep doing mundane tasks manually when you can replace them with a small (recursive) bash script? :)

So I did.

 

The script loops through all folders and checks for the telltale .git folder. If it’s there, it runs the wonderful “git add . && git commit -a && git push“. This is probably not ideal, but it’s how I prefer it.

If there is no .git folder, it checks for an executable file with the same name, i.e. a copy of itself, though it doesn’t check the content. (A diff, an md5 check or a symlink check might do this, but I trust myself not to screw this up.) The script then runs that file. There is a risk of recursive hell here with symlinks to directories, but I’ll just not do that and it’ll all be OK :) The logic looks roughly like the sketch below.
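
A minimal sketch of that logic (the real, full script is at the link below):

#!/bin/bash
# Commit and push every git repository one level down; recurse via copies of this script.
self=$( basename "$0" )
for d in */ ; do
        if [ -d "$d.git" ]; then
                ( cd "$d" && git add . && git commit -a && git push )
        elif [ -x "$d$self" ]; then
                # An executable with the same name: assume it's a copy of this script and run it.
                ( cd "$d" && "./$self" )
        fi
done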

The code is at http://www.gertschepens.be/bash#UpdateGit

So I put a symlink to the update script in ~/Documents/ and run it there, and a second symlink in ~/Documents/Projects/, effectively synchronising all the projects; hooray!

 

A simple script that works :)

Some android apps fresh users might appreciate

No ringing endorsements, just a small list of apps I use and like.

Aldiko Book Reader Premium – Android Apps on Google Play (or the free version, Aldiko Book Reader – Android Apps on Google Play; same features, I just bought it to support the app)


Ubuntu NetworkManager

This NetworkManager thing in Ubuntu is horrible. Well, it’s not too bad actually, but the command-line documentation sucks very, very hard!
A quick brain dump after my quest, for anyone dealing with this… excuse my messy text.

A Google search for “Ubuntu NetworkManager Commandline” offers help with disabling NetworkManager or installing it (help.ubuntu) and refers you to the nmcli command. Which is just great help. No info on configuring it without using the sodden graphical interface.
The info about installing was particularly useless to me since it was already installed, though the info about where the config files live and the reference to nmcli were moderately interesting. Pity there is no info about those config files; anywhere. So… I happily dicked around with generating configs on my laptop and using those on the headless XBMC box.
Right; so far my frustration. Next up: trying to provide some meaningful info…

You probably won’t have to fuss too much with this, as “Network Manager auto-creates connections on a best-effort basis”, though sometimes, and certainly on headless machines, you just want a fixed IP…

According to the help.ubuntu info, the configurations are in gconf or /etc. I did not find any of that data in gconf, but did find it in /etc:

/etc/NetworkManager/system-connections/

The config files go here.

The directory contains all your network configs, and NM promises to try to choose the best possible connection among them.
Configuration files are owned by root:root and have 600 permissions. They are formatted as follows; you will need to edit the UUID in the config file — check the nmcli part below.

root@Benedict:/etc/NetworkManager/system-connections# cat Wired\ 2.44

[802-3-ethernet]
duplex=full

[connection]
id=Wired 2.44
uuid=6a6e191a-4a8b-47ea-bc38-ef8b98748281
type=802-3-ethernet
timestamp=1318578920

[ipv6]
method=ignore

[ipv4]
method=manual
dns=192.168.2.1;
addresses1=192.168.2.44;16;192.168.2.1;

The “addresses1=192.168.2.44;16;192.168.2.1;” line is formatted as IP;prefix;gateway — here the IP 192.168.2.44 with a /16 netmask and gateway 192.168.2.1.

or

root@Benedict:/etc/NetworkManager/system-connections# cat Auto\ C

[connection]
id=Auto C
uuid=b6006760-005b-4fc7-b29a-f3565b6fdd8e
type=802-11-wireless
permissions=user:gert:;
timestamp=1320427786

[802-11-wireless]
ssid=C
mode=infrastructure
seen-bssids=00:18:aa:aa:aa:aa;
security=802-11-wireless-security

[802-11-wireless-security]
key-mgmt=wpa-psk
wep-key-flags=1
psk-flags=1
leap-password-flags=1

[ipv4]
method=auto

[ipv6]
method=ignore

For more info about getting your wireless network up, do a Google search; the info is out there!

Next up: nmcli

At any rate, nmcli (the command-line tool for controlling NetworkManager) won’t be much help beyond listing data, as “It is not meant as a replacement of nm-applet or other similar clients. Rather it’s a complementary utility to these programs.” You do need it, though, to at least find out more about the connections, their UUIDs and what NM is doing.

"nmcli con" lists the available connections:

# nmcli con
NAME UUID TYPE TIMESTAMP-REAL
Wired 472a4a85-b432-446c-a704-c7df7b7f5e3e 802-3-ethernet Wed 11 Jan 2012 12:18:48 AM CET
Wired connection 1 472a4a85-b432-446c-a704-c7df7b7f5e3e 802-3-ethernet Wed 11 Jan 2012 12:18:48 AM CET
C e99af4da-5c7a-495e-b1ec-45c81519ad32 802-11-wireless Wed 11 Jan 2012 12:18:48 AM CET

The “Wired” connection is my fresh, hand-made connection; the “C” wireless network was configured using the Gnome interface; “Wired connection 1” was created automatically by NM. Your new connection won’t show up, however, without the right UUID: you need to copy the UUID of the connection you want to use into the config file.
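
Copying that UUID over can be scripted too; a sketch based on the listing above, using the file name from my example config:

uuid=$( nmcli con | grep "Wired connection 1" | grep -oE '[0-9a-f-]{36}' )
sed -i "s/^uuid=.*/uuid=$uuid/" "/etc/NetworkManager/system-connections/Wired 2.44"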

Restarting the networking service will then pick the configuration file instead of the best-effort config.

# /etc/init.d/networking restart

After restarting the connection, the best-effort “Wired connection 1” vanished. I haven’t found anything about how to influence which connection is used when the best-effort choice isn’t the right one, but I only need the one, so I didn’t really look either :)

I hope this helps

Kouter Concerts 2011, Gent

We recently discovered the Kouter concerts in Gent: a series of concerts on the Kouter square during the flower market. A wonderful concept and a welcome companion to our regular coffee-sipping on the Kouter! (There’s nothing quite like strolling among the flowers and sipping coffees with interesting people. It just occurs to me that that would make an excellent twunch opportunity… I’ll have to look into that.)

After a few fruitless searches for a digital Kouter concerts programme, I scanned the flyer that was handed out there, so as to at least have some info for making further arrangements by email: KouterConcerten2011Gent scan. A further search on that keyword does turn up more results, though: uitgent, the Fanfare Overmere pdf and more. If only I had found those in my earlier search ;) Annoying that there is apparently nothing to be found on vlamo.be :/

Anyway, so now we know that…

 

KOUTERCONCERTEN 2011

  • 7 August – VIOS Accordeonclub Brakel
  • 14 August – KM Vrije Werklieden Lebbeke
  • 21 August – KM De Zwanenzonen Drongen
  • 28 August – Gent Symphonic Band
  • 4 September – KH Echo Der Leie Sint-Denijs-Westrem
  • 11 September – KM De Neerschelde Gentbrugge
  • 18 September – KH Sint-Cecilia Eksaarde
  • 25 September – Koninklijke Gentse Politieharmonie

 

And the first one is the accordion club. Although I am anything but an accordion fan, I am incredibly curious about tomorrow’s performance! :)

Proximus is sending out confusing text messages today /cc @belgacom_eva_NL

I just received the following text message from Proximus:

“Proximus info: Attention! Your internet usage amounts to 591MB. You have a subscription for 15MB. You will pay extra; this increases your bill. Call 080022500.”

Confusing, because I have a 750MB data subscription.

After a ridiculous wait on 080022500, I’m told that I am a business user and should call 080022200 instead.
On 080022200 I get someone on the line faster, who then kindly transfers me internally to Proximus, where I again have to wait just as long (I have the feeling I was transferred back to the first number).

Once there, I’m told that I do indeed have that 750MB of data and that the 15MB in the message is not correct. The friendly lady hears from a colleague that he knows more about it, asks me whether I can hold for a moment, and comes back a little later with more info.

The message is a new obligation: they have to send a text when the 15MB that comes with the subscription by default is used up, and so that is exactly what they did. I, however, have an extra 750MB package, so I can carry on for a while yet without “You will pay extra; this increases your bill. Call 080022500.”

So much for my 18 minutes with the Proximus helpdesk (even though I had better things to do). I would hereby kindly ask Proximus to word their text messages a bit more clearly, and perhaps even to send them only when they are actually relevant.