Wimminz – celebrating skank ho's everywhere

July 15, 2014

The amazing cosmic awareness of AfOR

Filed under: Wimminz — wimminz @ 10:43 am

There are days when I really do wish I could turn my life into a 24/7 unedited live video stream, not because I seek my 15 nanoseconds of fame, or because my life is just so fucking awesome and interesting, quite the opposite….

More of a fly on the wall documentary on the fall of Rome, as seen by Biggus Dickus, hypocaust and sewer worker.

But business and commercial and practical realities mean it cannot be so.

So, hyperbolically, because we all like a bit of hyperbole… fnaaar fnaaar

The day starts with a router swap, the old one is dead, except, when I get there, it isn’t, it’s not working because someone at the ISP made a Radius server edit, and to fix it all they need to do is put the correct CHAP user name into Radius.

You’d think they have this data in their customer files, but no, so the fallback position would be to log in to the router, oops… and get it that way, but even when I tell them the CHAP user name and they sort their Radius server, they still can’t log in to the router, probably because the vty section of the config is all screwed up, yes, they managed at some point 12 months ago (the customer tells me) to lock themselves out of their own router.
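
For anyone who has never had the pleasure, the CHAP user name the ISP mislaid lives right there in the running config on the WAN dialer, and the lockout lives in the vty stanza; something roughly like the sketch below, from memory, not the customer’s actual config, interface names and credentials invented, and the vty part shows just one of the ways you can brick your own remote access:

    ! where the CHAP user name actually lives
    interface Dialer0
     encapsulation ppp
     ppp chap hostname site0815@isp.example    ! the user name Radius wants
     ppp chap password 0 NotTheRealOne         ! and the matching password

    ! one plausible way to lock yourself out of your own box
    line vty 0 4
     login local              ! demands a locally defined user account...
     transport input ssh      ! ...and refuses telnet, while no local user or SSH key exists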

While pondering how this can happen and being passed between various dweebs at the ISP, none of whom appears to know more than one single command or task each, so it takes teamwork, Sam, I see how it happened: the guys at the ISP are using the Cisco telnet session to chat to each other, what the fuck could possibly go wrong….

You start to worry when one of those dweebs asks me if I know what the command line “length 0” means in the vty bit of the config….. (term len 0) I just tell him “Don’t ask me bud, I’m just the bloody field tech….” while making cock-sucking motions with my hand and mouth.
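
For the record, since he asked: they are the same pager knob in two different places, one baked into the config, one set on the fly, along the lines of the below (illustrative, not his actual box):

    line vty 0 4
     length 0                  ! in the config: no --More-- paging on these terminal lines

    router# terminal length 0  ! same effect, but only for the current exec session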

So, two and a half hours later we have eventually replaced one perfectly good router with another perfectly good router, and given the ISP the CHAP username for their Radius server, and sorted remote access to the box for the ISP, this qualifies as a successful job…. rock on Tommy.

Next I’m informed there is an urgent job an hour away down south from me, no sweat, but, the part, well it’s five hours away from me up north, so some dweeb has been given a part that costs maybe £230 new and sent on a 5 hour drive to me.

Then, they work out that the site is due to close in about 6 hours…

There ensue 4 separate telephone conversations with the brain dead fucks in head office, explaining to them that a package a 5 hour drive away from me isn’t going to get to site any quicker if I drive one or two hours north to meet the dweeb driving south half way, on the contrary, due to the vagaries of motorway systems, junctions, and no crossovers between north and south bound service areas, all we are going to do is fuck things up and delay matters.

Besides, I tell them, three fucking times, me and the dweeb in question have already spoken on the phone, it’s sorted, he has the GPS co-ordinates of the ideal place to meet me and do the handover.

Two more phone calls from the brain dead fucks in head office.

No response to the emails from me to them asking for more info, as it stands I have a site address, a closing time, and I’m getting a box with a new router…. you know, things like site data, contact numbers (particularly out of hours) and configuration data might be a fucking idea….

….crickets….

So, I eventually get to site, literally ten minutes before it is about to shut, thinking fuck it, I’m here, the dreaded 4-hour SLA is only just out, with any luck I can convince the store to accept the package before we all fuck off home for the night, 4 hours overtime for me, cool.

The best laid plans… the brain-dead fucks of earlier also happened to tell me we were under the hammer, 4-hour SLA with penalties, the store manager, despite it being 10 minutes before closing, can’t decide whether to shake my hand or just drop and give me a blow job, you see, the site has been hard down for three full days now….

Ah.

So I spend the next two and a half hours on site trying to get all the data that I need to configure the new router, I eventually get half of it, in a “just send the cunt a 9 meg attachment with all the documents from the proj, including all 350 client sites, and let him work it out” kind of way, which would be good, or good enough, if the bundle sent to me was complete, and updated, and contained all the info I needed.

Every 20/25 minutes I’m calling out of hours support, who keep telling me they have called 3rd line support, and they are gonna get back to me asap. Two hours in, for reasons that shall become apparent later, I call them back and tell them I’m bailing in 15 minutes, whether the required info is supplied to me or not (as I type this, still no contact…lol), at which time out of hours support admit that when they said they had spoken several times to 3rd line support who were gonna get back to me, what they actually meant was they had left voice-mail messages for 3rd line, who weren’t answering the phone.

Well, that was a waste of time, but hey, just think of the overtime.

My overtime clock is now sat at 5 hours and we all decide to call it quits.

I drive 10 minutes to the frankly ham-beast skank slut I texted while waiting for 3rd line to call me back, yeah, she’ll lock the kids in their rooms if I wanna pop over for a quick coffee and empty my balls into her.

I’m in and out that door in 30 minutes including the leisurely smoke and coffee, her kids are hearing strange men’s voices but she tells em none of their fucking business, go to bed, we go upstairs, we strip, she kneels on the bed, I fuck the slob doggy style, she sucks my cock clean, I dress and leave….

Just time to catch the Thai takeaway before it closes, that’s another 15 bucks on the expense account for this job, chow down on some nice grub, slug an alcohol-free beer (I know, but I’m driving, and don’t want a coke etc) and head home.

I arrive home just after midnight having accrued 7 hours overtime on this job, at time and a half, and having achieved sweet fuck all for the customer…. but hey, I got a free Thai dinner and got paid to empty my balls… and Nero could only fiddle while Rome burned.

Still nothing but -crickets- from my own HQ or the ISP 3rd line support about the job itself.

The site itself is of course still clearly hard down, day four of same…

No doubt someone is going to have to go back to site and sort it.

The site itself is one of those mall retail park places, and everything hangs off a single fucking DSL line, no backup, and the computers struggle to run XP, hell, they would have struggled when they and XP were all new.

The staff: think People of Walmart and you won’t be far wrong.

Meanwhile everything on site is 5 days out of sync, by everything I mean stock control, inventory, sales, cash, staff hours and rotas, POS specials and updates, the works, and it’s place your bets now on just how many days out of sync such a creaky and weedy system can function before it goes tits up for good.

Waiting for the crash?

We’re living through it.

October 8, 2013

Stuck in the RAM


I have had jobs where sites stop being able to connect to the mother-ship; usually these are sites using an xDSL modem to log into the mother-ship, and login is of course handled by the trusty Radius server.

The problem isn’t that the cheapo xDSL modem is dead, though that is always the second thing investigated, or that the cheapo xDSL line is dead, though that is always the first thing investigated; the problem is that the Radius server just stopped working, and you can “fix” it by making a change that simply should not make any difference: changing the Radius password on the Radius server and the xDSL modem / router.
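
The “fix”, for what it is worth, is nothing cleverer than picking a new password and setting it at both ends, then watching the session come back up as if nothing was ever wrong; a rough sketch, assuming the site end is a Cisco box doing PPPoE and the ISP end is something FreeRADIUS-flavoured, with every name and password invented:

    ! site end - the xDSL modem / router
    interface Dialer0
     ppp chap hostname site0815@isp.example
     ppp chap password 0 Brand-New-Pass        ! the change that "should not matter"

    ! ISP end - the same user gets the same new password in Radius,
    ! e.g. a FreeRADIUS-style users entry:
    !   site0815@isp.example  Cleartext-Password := "Brand-New-Pass"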

I’ve had this on Cisco kit too, you need to TFTP a patch across, so configure terminal and then give it an IP address, give your laptop an IP address, and as a final sanity check before starting the TFTP you attempt to ping each box from the other, and it doesn’t work, and you can repeat the process ten times, and it won’t work, but if you reboot the Cisco box it will work first time.
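
The routine itself is nothing exotic, roughly the below, with made-up addresses and interface names, and the punchline is that the sanity-check ping fails every single time until you bounce the box:

    router# configure terminal
    router(config)# interface GigabitEthernet0/0
    router(config-if)# ip address 192.168.1.1 255.255.255.0
    router(config-if)# no shutdown
    router(config-if)# end
    ! laptop plugged into Gi0/0, set to 192.168.1.2/24, running a TFTP server
    router# ping 192.168.1.2                    ! fails, and keeps failing...
    router# reload                              ! ...until the box is rebooted, then:
    router# ping 192.168.1.2                    ! works first time
    router# copy tftp://192.168.1.2/patch.bin flash: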

Neither of these problems should exist, within the framework of “things as they should be” or rather “things as they are taught”… for example it is heresy to suggest rebooting the Radius server, so it is discounted as a source of problems when a client site cannot log into a mother-ship, and for example it is heresy to suggest that any console / command line output from Cisco IOS is less than 100% truthful, and yet, if either of those articles of faith were true, the fixes I used would not work.

When asked what the problem was, I say something like “Was stuck in the RAM“, which is of course meaningless *and* inaccurate, but it is an explanation of sorts, and it is *far* closer to the truth than the official answers.

I’m not a coder, but I suspect the truth could be found somewhere in the realms of buffer overflows and bounds checking.

However, nobody calls a senior coder in when a remote office fails to connect to the mother-ship (which one way or another is what 99% of my day job is about, making two sites connect to each other), so as a result you get anything *but* the truth.

As an aside, before I continue, if you are thinking that these are only problems encountered because I am working with cheap ass kit on cheap ass contracts for cheap ass clients, you would be as mistaken as you can possibly be… I absolutely guarantee that even if you have never set foot in the UK you will know 50% of the end users by brand name and reputation alone, even if they do not have a presence local to you.

Most of the kit is, relatively speaking, not very much money, anything from 500 to 5,000 bucks a box, and that is not a lot of money for a site that is turning over a million a week or an engineer that costs the end user 250 bucks before I even leave MY home, much less turn up on site… the kit itself is very mediocre quality, hardware-wise, and that is me speaking as an engineer. Trust me on this.

Cisco kit sells because it all runs IOS, and finding people with Cisco qualifications who can write / edit / troubleshoot the config files, which are the files that tell the IOS what to do, is about as hard as finding a web designer; worst case scenario is there are several tens of thousands of them available for not very much, about 90 milliseconds away in Mumbai.

This, by the way, is the SOLE reason everyone loves the cloud and virtual machines: virtual machines don’t have ANY hardware, so you NEVER need a field engineer to turn up and move a patch cable, power cycle to unstick the RAM, do an actual install or upgrade, or anything else…

So, back to the plot…

It’s down to ETHOS: car brakes were basically designed so the default state was that they were off, truck brakes were designed so the default state was they were on (and it took air pressure to keep them off)… so you pressurise a car system to make it stop, and you leak pressure out of a truck system to make it stop.

Ask yourself two questions:

  1. Which is safest?
  2. Which is cheapest to make?

Suddenly everything becomes clear.

Unless you are the bit of NASA writing the actual code that directly controls the spacecraft flight hardware, or the bit of GE writing the actual code that directly controls the control rods in the nuke pile, or… and I cannot think of a third fucking example…..  then option 2 always gets a look in.

Most of the time the bottom line is the bottom line.

“Good enough” (mostly)

By definition you are excluding the “one in a million” event from your calculations.

Which is great, *until* that event comes along… luckily for humanity, in the sphere of my job, until I fix it that means someone didn’t get their wages, someone didn’t get their stock in trade to sell, someone didn’t get a product or service that they were going to re-sell to someone else.

It can all be very serious and even life-changing to the individuals concerned, but, the small print can cover that shit, nobody got killed…. fuck ’em…

We have had quite a few “cascade failures” in teh intertubez, they aren’t yet as serious as the power grid blackouts we have had, but then again the power grid is everywhere and literally in everything, and the net is still a relative newbie; chromebooks running exclusively on data living on a virtual machine in the cloud somewhere, needing 100% fast net connectivity even to boot up into anything useful, are still rare.

But the times, as Dylan said, they are a-changin’.

I am seeing, as a result of these changes, cases where the 1st, 2nd and 3rd level responses to problems simply do not work, because the RAM that is stuck is not in the local machine, it is in a central machine that MUST NOT be rebooted, or worse still, in a cloud virtual machine.

At that point the on the spot field engineer (me) can no longer just ring the remote server engineer, compare notes, agree on a likely cause and course of action, and resolve the problem.

I saw this happen, in the flesh, before my own eyes, for the first time, personally, yesterday, on NetApp kit; unfortunately there were so many levels of virtuality that the server guy couldn’t diagnose which layer of virtual RAM was stuck, or where, and there was no possibility of simply rebooting, as that would take the entire enterprise down and trash that whole day’s production, which was already sold and due to be in the shops tomorrow, or of changing CHAP/TACACS/Radius logins and resetting the problem that way… no worries, a whole new virtual machine was created, problem ignored.

Fuck it, I still get paid either way.

Asking people like me about my opinion on such things, well, that would be like asking a doctor about disease, fuck that, ask the pharma marketing machine, they have their eye on the bottom line.
