Activation context Internet Explorer issue

Posted at 5:28:15 PM in Recovery (44)

I got this message when I tried to enter a web site address in the Internet Explorer address bar: "THE REQUESTED LOOKUP KEY WAS NOT FOUND IN ANY ACTIVATION CONTEXT," which didn't make any sense.  Firefox worked fine.  The initial home page loaded fine, but I couldn't go to any of the links.  I googled the issue and found that no one else seemed to have an answer either.  After reading several posts, I noticed that several people had tried to downgrade from IE7, or it appeared that in an effort to correct the issue, they had done a repair install with the original installation disk. 

In my case, I used the Windows XP SP3 disk to repair files that had been removed by the anti-virus (AVG) before the IE error occurred (see post).  The repair install doesn't just restore files that are missing.  It reverts all the service packs, patches and Internet Explorer to the original installation, but makes a copy of the computer's registry at the beginning of the repair operation and restores that after setting everything back to its original state.  Before I did the repair, I had IE8.  After the repair I had IE6.  So, iexplore.exe gets called the same as it did before, but with registry entries that expect certain files to be in place which aren't.  This creates other problems as well, since all the patches that were installed before are also reverted.

I had a copy of the IE8 installation file and ran it.  You can obtain a copy from here.  I used the WinXP 32-bit version in the United States section.  Once that was installed, the error "THE REQUESTED LOOKUP KEY WAS NOT FOUND IN ANY ACTIVATION CONTEXT" no longer showed.  I had about 87 patches to apply using Windows Update, and after those were completed, everything seemed to work fine.  Other programs that aren't part of the original disk don't seem to be affected, e.g. Microsoft Office, Peachtree, QuickBooks, etc.  If they used any of the files that came with the original operating system, applying the patches would restore those files to the versions expected.


Written by Leonard Rogers on Thursday, December 30, 2010 | Comments (0)

AVG issues

Posted at 2:59:23 PM in Recovery (44)

One of my issues with anti-virus software is its brute-force removal of infected files, regardless of a file's importance. In this situation, the cure is far worse than the illness.  The computer I worked on today wouldn't start.  It halted with a BSOD (Blue Screen of Death) saying that Windows shut down in order to prevent more damage to the system.  AVG 9.0 found a virus in explorer.exe and winlogon.exe and quarantined them without asking the user.  I'll be the first to admit that most users just press the go button anyway, but in this case, the user said that the computer just shut down on its own and then wouldn't restart.  

I didn't know what files had been removed, because the quarantine renames the files and gives them a .fil extension.  I used Windows PE to access the hard drive and look around.  I knew that explorer.exe was one of them, as that file is commonly attacked and it was missing, and the file size of the last quarantined file matched that of a working explorer.exe.  I couldn't tell what the second file name was and was forced to do a repair install of WinXP SP3.
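Matching a quarantined file back to its original by size, as I did with explorer.exe, is easy enough to script. Here's a minimal sketch in JavaScript; the reference sizes and .fil names below are made-up placeholders, not AVG's actual vault layout:

```javascript
// Hypothetical reference sizes for known-good system files (placeholders).
const knownFiles = {
  'explorer.exe': 1033728,
  'winlogon.exe': 507904,

// Given quarantine entries ({name, size}), guess each file's original
// name by looking for an exact size match among the known files.
function matchQuarantined(quarantined, known) {
  return => ({
    likelyOriginal: Object.keys(known).find((n) => known[n] === q.size) || null,

const result = matchQuarantined(
    { name: 'a0001.fil', size: 507904 },   // same size as winlogon.exe
    { name: 'a0002.fil', size: 1033728 },  // same size as explorer.exe
    { name: 'a0003.fil', size: 12345 },    // no match

An exact size match isn't proof, of course; it's only a hint about which .fil is which.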

After the reinstall, I was able to open the virus vault and see what files had been infected and removed.  A common tool I use from this web site is Malwarebytes Anti-Malware.  On this site, you'll find that most of the virus removal procedures involve Malwarebytes software, which you can download here.  Another one I use regularly is combofix.exe from the same site.  (ComboFix won't work on 64-bit systems or on systems after WinXP as of this writing.)

Both of these applications take the time to determine the threat and extract the problem without damaging the system.  I'm not sure why antivirus writers can't make .fil copies of critical files and save them for later restoration in case a critical file is infected.  Certainly they must know which files are needed and critical to the PC's health.

Written by Leonard Rogers on Thursday, December 30, 2010 | Comments (0)

Clearwire Part 3

Posted at 6:25:28 PM in Vendors (37)

Before I returned the equipment, I did one more test with the home-spot modem.  I brought it to the area where I got a good 4G signal on my laptop and used it in place of the DSL service through a router.  The modem got 5 full bars of service on a 4G connection, but the performance was dismal.  The person on terminal services kept getting disconnected.  My ping test showed 30% packet loss and radical latency.  I had to swap the modem back out for the DSL, which is at the 5-mile mark and barely able to provide service, but better than the Clearwire service.

Frequently in the literature and wikis I've read, they call this a last-mile service, and indeed it is.  The only reason anyone should have to suffer with this kind of performance is where there are no other options... no DSL or cable service.

Written by Leonard Rogers on Wednesday, December 29, 2010 | Comments (0)

Clearwire part 2

Posted at 12:34:23 AM in Vendors (37)

I have decided to return the equipment and cancel my account for Clearwire's internet service, primarily because they don't know their coverage and are selling it where you can't use it.

The second day I had the service, I discovered that they captured my internet activity and directed me toward their pages to agree to a contract (the term of which was most likely 2 years, but I was never told, and the contract they wanted me to agree to didn't say either.  It covered all terms, from 24 hours to 2 years.)  One good thing I saw in the contract was a clause that said I had 14 days from activation to cancel the contract without early termination penalties.  However, the last clause, which really bothered me, was a statement that I was authorizing them and any of their affiliates access to my phone number for any reason, including sales calls, even though I had listed my number with the national "do not call" list.

I could not tell if agreeing to it was required in order to use their service.  I'm sure that listing my number with a do-not-call list does not prevent any commercial agency from calling to collect on valid debts or even to make other offerings available to me.  It bothers me that they would also include their affiliates, which can be so broadly interpreted as to mean anyone except their competitors.

I had the opportunity to see the real power of 4G with the laptop's USB device from my office in another location.  I was able to get 12Mbps, which is extremely good for a cellular network.  With that, I knew I wasn't getting the 4G network at my home with the non-mobile modem.  When I brought the laptop home, it wouldn't even get a signal, even though the non-mobile modem was getting one.  I certainly didn't get the 12Mbps that I got at the office, not anywhere close to it.  I watch Netflix on my Wii almost every night.  The quality of the service was horrible.  The screen pixelated frequently, and I would lose the signal and sometimes have to wait almost 5 minutes watching the stream buffer before it would continue.

In my opinion, Clearwire has a long way to go to develop a mature product.  I understand Verizon is also deploying 4G.  Perhaps I'll test that and see how it works.  I guess the next test is to see how easy it will be to return Clearwire's product.

Written by Leonard Rogers on Tuesday, December 28, 2010 | Comments (0)

Blog modification

Posted at 1:11:08 AM in Web (12)

I didn't like having the form for leaving a comment display on the page unless the reader actually wanted to leave one.  It made the page look way too busy, and I'm of the opinion that not everyone is going to leave a comment.  So I decided to hide it and leave a link the user could click to get the comment form to display.

I know you can hide information on a page using CSS styles and then use JavaScript to change the style characteristic to show the hidden information.  For example, to hide everything between <div></div>, the opening tag would be <div style="display:none">.  This also hides the real estate used to display the information.  My problem was the code to get it to show, so I did some research.

The first site I found was here, and I used the following snippet.

<script type="text/javascript">
function toggle(el) {
  var myEl = document.getElementById(el); = ( == 'none') ? 'block' : 'none';
<div id="show" style="display: block;"><a href="#" onclick="toggle('show');toggle('hide');toggle('a');">Show</a></div>
<div id="hide" style="display: none;"><a href="#" onclick="toggle('show');toggle('hide');toggle('a');">Hide</a></div>
<div id="a" style="display: none;">Hello, World</div>

This worked pretty well, but when I clicked to show the form, the page would refresh and jump back to the top.  It would also clear off any data I had entered.  I found a comment on this page, "The link contains “javascript:”, which indicates that this link doesn’t point to a page...", that got me thinking.  The href="#", while it doesn't go anywhere, could be causing the problem, as it points to a page and not to JavaScript, so I modified the snippet above to:

<div id="show" style="display: block;"><a href="javascript:onclick=toggle('show');toggle('hide');toggle('a');">Show</a></div>
<div id="hide" style="display: none;"><a href="javascript:onclick=toggle('show');toggle('hide');toggle('a');">Hide</a></div>

With that modification, the page stopped refreshing.  You can see the results on the page you're viewing by clicking Leave a comment. 
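For completeness, another approach I've seen suggested (not the one I used above) is to keep href="#" and cancel the default navigation by returning false from the onclick handler. A sketch of that variant:

```javascript
// Toggle an element between hidden and shown by flipping its display style.
function toggle(el) {
  var myEl = document.getElementById(el); = ( === 'none') ? 'block' : 'none';

// Markup sketch: the trailing "return false" cancels the default
// href="#" navigation, so the page neither jumps to the top nor
// clears any form data:
//   <a href="#" onclick="toggle('a'); return false;">Show/Hide</a>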

Written by Leonard Rogers on Friday, December 24, 2010 | Comments (0)

Server Crash NT4.0 and Restored

Posted at 12:43:59 AM in Hardware (9)

What a great way to start the holidays.  Just as everyone was wrapping up to leave for the holidays, we discovered that the server drive had crashed.  All the diagnostics pointed to the drive being the issue.

The server is a Dell PowerEdge 2450 using the integrated RAID controller.  The indication to the user was that files couldn't be read and programs aborted.  However, on the server console, the errors were write-behind cache issues: $Mft couldn't be written to, some data may have been lost, and the same error displayed for several folders and files on the drive.  The OS on this server is NT 4.0.

The OS is installed on a striped array of 2 drives, 9.1GB each, giving a total of 18GB.  RAID 0 is not a configuration that should be used for the OS drive.  The drive was partitioned with 2GB for the OS, 14GB for Exchange server files (which were no longer being used), and a Dell utility partition.  Thankfully, there was nothing wrong with the drives in that set.  The data drive was a single partition mounted in the RAID as a volume with 146GB of storage.  All the drives were U160 SCSI hot-swappable drives.

The backups are performed by BackupPC, which has been in operation for about 5 to 6 years.  It has performed flawlessly, but I've never had to restore a whole drive from it before.  I used Acronis v9 Workstation to make a bare-metal image of the OS drive and all its partitions.  I also tried to back up the defective drive, just in case it was NT that was causing the problems, but Acronis couldn't back it up either.

Once all the data was backed up, I pulled the integrated RAID controller plug off the motherboard and took a look at the drives in Acronis again.  All the drives were uninitialized.  I restored the bare-metal backup to drive 0, which was a 9.1GB HD.  Acronis restored the OS partition as it was and shrunk the partition used for Exchange without any problems.  I was able to boot back into NT 4.0 without any errors, but I still didn't have a data drive.

The new drive I purchased wasn't recognized by NT, but the SCSI controller recognized it.  When I did a data verification, it "red screened" right away, indicating that the media wasn't any good.  I tried a drive that we had on hand that was not marked as bad and found the same problem when I did the media test.  I was left with only the original HD that was bad to begin with.  When I did the media test in the SCSI controller interface, it reported only 3 bad spots on the drive.  NT also recognized the drive, so I went ahead and formatted it and started the restore.  This drive will have to be replaced, but it appears to be usable for now.

The drive was purchased from .  I ordered it late on 12/22/2010 (Wednesday) and was told that it wouldn't arrive till Monday, even with the overnight delivery I requested.  However, it showed up on 12/23/2010 at 11:00am, which I thought was pretty good service.  I have submitted an RMA and will follow up with their service on that item.  I found them on , but called anyway because I needed the item to be delivered to a location that was not registered with the credit card.  They said it wouldn't be a problem as long as the ship-to location was a business.

It took 2 hours to get a bare-metal backup.  Then I pulled the plug on the controller and restored the OS.  I spent the next 6 hours trying to get the system to take the drive back without restoring a backup and couldn't do it.  Then 2 hours formatting the replacement disk and 2 hours getting the restore of the data drive going.  The automatic backups had started for all the PCs in the office, which caused a lot of problems getting the restore to go.

BackupPC never shows the xfer PID for restores the way it does for backups.  I kept checking the status, and since no xfer PID was showing, I thought it wasn't running.  When I checked the server, rsync was eating a lot of CPU, which is usually an indication that the backup is running, so I checked the drive and it was filling up.  The restore operation took over 7 hours.  It restored 55GB of data.

I should have selected the current incremental backup as it would have brought everything up to date.  However, I did the full restore and then applied the incremental backup, but the incremental backup is taking just as long to restore.

I was really pleased that Acronis backed up the RAID and allowed me to restore it to a SCSI drive, and the system booted.  I have tried this on HP servers, and Acronis can't recognize the RAID on HP servers.  They have a bare-metal implementation for HP servers, but it requires installing Acronis in the OS.  That becomes a problem when restoring the system, because you need to install the OS and then install Acronis in order to restore the bare-metal backup, which isn't really bare metal.

I approach every restore with a lot of trepidation.  It's bad enough that the data is lost, but if the restore doesn't work, then the problems really begin.  I have restored a system that didn't have a bare-metal backup, where all they had was a day-old SQL backup.  I had to install everything and all the users and prepare the SQL database correctly, then restore the data.  It was 36 hours of work, but on Monday morning the system was back online, and I was a mess.  I never want to do that again.

Written by Leonard Rogers on Friday, December 24, 2010 | Comments (0)

Solid State Drives

Posted at 9:13:13 PM in Hardware (9)

Investigating the deployment of Solid State Drives (SSDs) to determine the benefit and cost.

Written by Leonard Rogers on Thursday, December 23, 2010 | Comments (0)

NTBKUP recovery details

Posted at 9:05:26 PM in Recovery (44)

Due to the corrupted .bkf files, the headers did not properly contain the drive letter designation required to use the -p option with ntbkup. I also had problems using the -d option. I got a prompt message like this:

DIR Tree warning, 1st node not a VOLUME! Force '?:'

This prompt led me to believe that it wanted some info to replace the drive letter, so I tried a drive letter or entered Y, but to no avail. However, as of this writing, I discovered that it was working in the background. I was working with 20GB files when I did this originally and was expecting some kind of response, which it didn't give me, so I aborted the operation. While writing this document, I used a 6GB file, and as I was writing, the directory listing was produced. I then noticed that the hard drive was very busy after I pressed Enter.

The normal script to run ntbkup.exe is:

ntbkup sample.bkf -x -pc:dump

But all this did was start dumping all the files into one directory, which the author states it will do. This caused a huge problem, because the directory structure was important, and also because many of the files had the same name. The directory structure was required to keep them from being overwritten. Ntbkup is very happy to overwrite files, so this would not do.

What I needed was to create the directory structure, change into a single directory at a time, extract the files from the backup file that belonged in that folder, and then move on to the next folder. This required that I know the directory that's coming and the contents that should be in that directory. Additionally, I didn't want to process the entire 20GB backup file just to extract 255 files and then start over. That would have been very time consuming and wasteful.

Enter the verbose mode

I ran ntbkup with this command:

ntbkup sample.bkf -v > sample.txt

This produced results that can be seen by following this link. The file ended up being 32MB, but the link only has a portion of it, with some areas of interest which I will include here.

I was interested in this section of the file:
FILE found keyword at offset 0x3ad1800 data from 0x3ad192e to 0x3af3acc
DIRB found keyword at offset 0x3af3c00 1st Stream: NACL
Dir Name[94]: files\FTRX\Acctg\Invoices\INVOICES.000U

This had the directory name, the offset to the beginning of the directory structure, and the end of the previous directory structure. I indexed the file on the keyword DIRB, extracted the offset to the beginning of the directory, used the next record to find the end of the directory structure, and also extracted the directory name. From this information, I produced a batch file which would create the directory, change into that directory, and then begin extracting only the data from the DIRB offset, ending at the last file of the directory, or the line before the next DIRB.

Here is a sample of the batch file:
mkdir "%bkfile%\files\FTRX\Acctg\Invoices\INVOICES.000T"
cd "%bkfile%\files\FTRX\Acctg\Invoices\INVOICES.000T"
"%bkfile%\ntbkup" "%bkfile%\ntbkup2.bkf" -x -jh0:hd58
mkdir "%bkfile%\files\FTRX\Acctg\Invoices\INVOICES.000T"
cd "%bkfile%\files\FTRX\Acctg\Invoices\INVOICES.000T"
"%bkfile%\ntbkup" "%bkfile%\ntbkup2.bkf" -x -jh62000:hc5d58

As you can see, I made use of the -j option, which tells ntbkup to start extracting at the first offset, specified in hex (the same number system produced in the verbose listing), and exit when it reaches the end of the block specified by the second hexadecimal number.

The process of getting these numbers was very involved, and I intend to explain that also. The process is procedural and could be scripted, but my understanding of VBScript in MS Access is limited. I could probably write it, but it seems cumbersome that Microsoft didn't include built-in methods to easily access the databases and queries in the same database. If I were using a separate VB engine and accessing the databases from another program, I could understand the process of defining each connection and the name of each object all the way down to the field level, but since they are in the same program, I find this very annoying.

I have only one major stumbling block with the layout of the verbose extract from ntbkup. In the middle of a block of data, say from folder T to folder U, the offset suddenly jumps back. Sometimes this jump is huge, going back to the beginning of the file, which happily causes ntbkup to start extracting everything into folder 124 until the new end is reached. I don't know why it would make such an erratic shift, except that because these files were erased and frequently overwritten by other backups, some overlaying may have taken place. These jumps back to earlier points in the file are not frequent, but they needed to be looked at in order to keep from dumping the whole backup file into one directory and then doing it again later in the restore process.

See this sample for a jump back in the middle of one folder group.

Written by Leonard Rogers on Thursday, December 23, 2010 | Comments (0)

Clearwire
Posted at 8:38:00 AM in Vendors (37)

I have been attending the computer fair for several months and walked past the booth for Clearwire (also known as Clear) and ignored them, because I thought it was just a gimmick to sell cellular internet access, which I didn't need.  Little did I know that they were also selling a fantastic alternative to DSL and cable internet services for the home, also over the cellular network.  4G has arrived!

I finally discovered their offerings while installing a brand new laptop for a client.  During the install, I kept getting a pop-up message about WiMAX, which I googled.  WiMAX is a built-in cellular device that allows you to connect to the 4G network and get phenomenal download speeds.  The fact that it's built into the laptop means you don't have a USB device to lose or get broken off at the neck.  It means wherever you have your laptop, you have internet access... in theory.  The Wikipedia page for WiMAX explains a lot about the service deployment and possibilities.  It specifically talks about Clearwire, which I then went to look up.

Sure enough, Clearwire is the same company that I have seen at the computer fair under the name Clear.  You can check out their web site at .  They advertise speeds of 6Mbps down and 1Mbps up, which would be pretty good for home service and would be awesome for mobile service.  I didn't understand the home service at first.  I believed the service was associated with a mobile unit such as a USB or the new WiMAX devices installed in laptops, so why the home service?  They also advertise phone service, but the modem I received had a note that phone service would require an additional connector to allow the phone to be attached.  I get the impression from this that the phone service isn't quite ready yet.

I bought the bundled service with a home-spot access modem and a mobile USB device.  The equipment was free, but I was told it would normally be available for purchase at $80 for the modem and $85 for the USB connector, or I could get them on a lease option.  But free is better; I was told that no lease option would suddenly activate after a certain period of time and that the equipment was mine to keep.  The service included one month free, which was a $60 value, and they broke that up over the first 2 months.  The total to activate the service and get priority shipment was $32.00.  I got the hardware in 2 days, delivered to my house.

The setup couldn't possibly be easier.  The home-spot modem is a little larger than a standard router.  It has two ports, one for power and one for your Ethernet connection.  On the front edge are 5 lights that all light up when it's plugged in, with the center light flashing.  This continues for about 1 or 2 minutes, then they all flash in sequence.  The service is ready when the lights become solid with no flashing.  The 5 lights actually indicate the signal strength, 5 lights indicating full signal and fewer lights indicating less signal, just like your cell phone antenna signal strength indicator. 

The instructions suggest that you put the modem as close to a window or external wall as possible in order to improve signal strength.  I found it does make a difference.  Sitting next to the wall, I got 2 bars.  I got 3 bars when I moved it next to the window, and the 4th bar would light quite frequently when I opened the window.  I don't know how much that actually helps, though.

The first problem I called tech support about was that the password they supplied with the modem setup instructions wasn't working.  They provided CLEARWIRE123.  That didn't work, and neither did any of the other passwords that I would normally see when setting up a new modem.  I tried password, 12345, admin, and blank, and none of them worked.  I tried clear123, clearwire123, Clear123, Clearwire123 and several others before I got support on the phone.  The first thing they suggested was that I try password.  I thought that was funny; why would they help me guess at the password?  After they put me on hold, they came back with the password motorola, which worked.  So much for the documentation, or even building the interface the way their instructions suggest they did.

I was curious, though, about my call to tech support, as there was a message that indicated there was a problem in my area and to hang up and call again later.  So I asked what the problem in my area was.  The technician couldn't tell me.  However, since the password issue didn't keep me off the internet, I ran some speed tests and found the performance absolutely dismal.  I felt for sure the "problem with the service in my area" had something to do with it, so I explained why I was asking about that problem. 

I really doubted that any problem they were working on would cause the problems I was seeing, but I wanted to hear what the issue was.  I used a speed-test site to test my service and found my IP reported that I was in Florida.  Naturally, choosing any test location in Florida would make my test that much worse.  Sure enough, my latency was 311ms to 456ms.  When I picked a location closer, in California, the latency dropped to 145ms, which isn't good, but far better than the original test.  The download was 1.3Mbps and the upload was 70kbps.  That didn't change when I selected a more local site.  Web pages loaded very sluggishly, worse than on any cellular data service I've ever used.

Which brings up the connection to tech support.  Our conversation was very scratchy and often faded to where she couldn't hear me.  That's an indication of VoIP, and I wanted to ask the tech if they were passing the voice communications over their own service; if so, that was probably a good indication of why I wasn't getting good service.

When I reported the horrid numbers I was getting, the tech suggested I run a speed test, which I was already doing.  They finally told me that the problem was "on-going maintenance" and it should be repaired in 24 to 48 hours.  That of course tells me nothing, but I did note that the service was very poor and asked whether I would be able to return my equipment and cancel my plan.  The tech just laughed and told me that everything would be alright and not to worry.

Good tech support.  I'm willing to wait a few days to see if it will improve.  In my research and discussions with the sales and tech support people, it appears that the service runs over Sprint's network.  Sprint's is the only service I've ever used whose coverage map showed the area I was in as covered when I didn't get coverage there.  There wasn't even a hint of service.  When I called, they would tell me that they were planning deployment in that area, but they couldn't give me a date.  After 6 months of that runaround, I cancelled my account.  That was 15 years ago, and it certainly seems that the same runaround tactic is still being employed today.

There is no way to tell from within the modem's interface whether I'm even getting 4G service.  I asked the technician if there was a way to tell, and they seemed totally confused as to what I was asking.  They looked up the address, told me I had 2 cell towers very close, and said that I was for sure using the 4G network. 

"Right, but these aren't 4G speeds, and not anywhere close to the advertised speeds.  What's the problem?" 

"Move your modem closer to a window or open the window, sir." 

"I've done that, I have 3 to 4 bars of signal strength." 

"Then you should be getting good service." 

"That's my point. I'm not. How can I tell what service I'm using?"

" I'm sure you're using the 4G network."

"Then why aren't I getting 4G speeds?" 

Long pause. 

"How long till this 'on-going maintenance' is completed?" 

"Within 24 to 48 hours, sir."

"And if it doesn't improve after that time, can I return my equipment and cancel my account?" 

<laughs> "Your service will be fine sir."

It's a good concept.  It just needs a little more help getting off the launch pad, like getting 4G service out where it can actually be used. 

Written by Leonard Rogers on Thursday, December 23, 2010 | Comments (0)

Quickbooks PDF printing issue

Posted at 9:58:57 AM in Software (15)

One of the biggest issues I've experienced in upgrading to Windows 7 is with 64-bit drivers, old printers, all-in-one printers, scanning, and QuickBooks PDF printing.

This is the reason given on the QuickBooks support pages: "The QuickBooks PDF Converter settings are not compatible with some settings on some 64-bit versions of Windows 7, Vista, or XP operating systems."  (That page also links to the solution that I used today.) 

In a previous installation, the customer was using QuickBooks 2009, upgraded to QuickBooks 2011, and that fixed the problem.  Upgrading to the latest release is the number one recommended solution, but not everyone is willing or able to do that.  Today I used the QuickBooks PDF Diagnostic Tool on a QuickBooks 2008 installation.

When I ran the tool, it opened a DOS window with the message "Please wait while the diagnostic checks for issues..." and then indicated that I should test QuickBooks, and that if I couldn't create a PDF or email a PDF document from within QuickBooks, I should follow the steps on the web page that would open shortly.  When I clicked OK, the web page opened, but the diagnostic didn't close.  Normally I'd close the diagnostic DOS screen and try the test, but I left it open this time, thinking that it might actually be monitoring the PDF creation process. 

When I opened QuickBooks, the PDF creation worked fine and I was also able to send PDFs via email without a hitch.  This worked very nicely.  Kudos to Intuit.  

When I closed the web page, the diagnostic also closed, so apparently it was monitoring the status of that window.  The only thing I noticed was that the PDF printer for QuickBooks had been set as the default, which caused problems with other printing.  I set the original printer back as the default, and QuickBooks continued to work correctly.  All's well that ends well. 

Written by Leonard Rogers on Wednesday, December 22, 2010 | Comments (0)


Posted at 11:11:14 PM in Vendors (37)

This is the first time I've ever seen a telecommunications vendor come pick up their equipment.  I've been with Verizon, UUNET, and Qwest, to name just a few, and have worked with many others, and I've never seen any of them come pick up their equipment.  I have stacks of routers to prove it, and most of my customers have units in the telephone closet that they don't know where they came from or what they're being used for, indicating there's a huge graveyard of dead network appliances all over the country. 

However, today Telepacific came to pick up a router from the client I'm currently working for.  This particular router was installed 5 years ago and was never used in those 5 years.  I submitted a request for credit from them when I discovered the additional billing for a T1 we never used.  This is a story that needs a little background...

This client has been using Telepacific for about 10 years for internet T1 service.  We used a different vendor for our telephone services until about 5 years ago, when we got a better deal from Telepacific.  There was an onsite computer tech who managed most of the installation, and at the time I noticed there were 2 T1s for internet service, which we didn't need.  I told the computer tech to get them to remove one, and I even called Telepacific myself and was told it would be taken care of.  We never plugged in that router, and I assumed it was taken care of.  Three months ago, I was asked to look at the Telepacific bill, and I discovered that we were paying for too many T1s and found that they never took the circuit off.  

Telepacific now owes us a $35K credit, but I don't know if we're going to get it.  I realize they had to pay for the service (though I don't know how much) and they have a contract that we signed.  I'm just hoping they will meet us halfway.  The killer for me was that I never communicated my request in writing.

I cannot help but wonder why the sudden interest in picking up equipment.  Are they trying to recover costs?  Is the economy that bad?  Or was it an excuse for an employee to stop by and say hi to an old college buddy?


Written by Leonard Rogers on Tuesday, December 21, 2010 | Comments (0)

The Bing Experience

Posted at 5:23:06 AM in Web (12)

Well, I feel foolish.  I wrote this based on my frustrations from a couple of weeks ago.  After writing it, I decided to go back and see how Bing was doing, and they crawled my site yesterday.  Perhaps their servers took a Christmas and New Year's break.  Who knows.  But on their forums, site indexing is the subject with the most posts.


Even though my site wasn't being indexed, and the control panel summary profile didn't show that it was, the tab that showed sites pointing to me and sites I was pointing to was populated.  Go figure.

After several weeks of trying to get Bing to crawl my web site, it has yet to be crawled. The other two search engines I submitted my sitemap to crawled my site within 24 hours. Bing doesn't even show the reason it isn't being crawled.  

I investigated the forums on Bing's site and found that this is a common problem. There are tons of suggestions on how to overcome the limitation and get Bing to crawl your site.  Apparently, Bing relies on "backlinks" (other sites that link back to your site). One nifty idea is to sign on to forums that Bing does crawl and put a link back to your site there. Supposedly, that will get your site crawled. But when I searched using the site: keyword and looked for pages linking to mine, I came up with several that linked to me. Obviously, Bing has those sites indexed, but has not, to date, indexed mine.

Written by Leonard Rogers on Tuesday, December 21, 2010 | Comments (0)

Webmaster tools

Posted at 5:21:28 AM in Web (12)

Implemented Google Analytics, Google Webmaster Tools, Yahoo Webmaster Tools, and Bing Webmaster Tools (the experience with Bing was less than satisfactory).


Written by Leonard Rogers on Tuesday, December 21, 2010 | Comments (0)


Posted at 5:20:57 AM in Web (12)

Implemented surrealcms on this web site and am applying it to other web sites I designed.

Written by Leonard Rogers on Tuesday, December 21, 2010 | Comments (0)

Hard Drive Failure

Posted at 5:17:47 AM in Hardware (9)

Raid Drive Failure:

The company had set up a junk RAID server to handle bulk image storage for a document imaging service. The document imaging program was Docuware, which used SQL as a database and allowed the images to be stored on any attached storage device. However, we were told that certain options of the software we had purchased would not work unless it was installed with the images on the same server as the SQL installation. The option we were not able to use was the ability to store the images on CDs so that the CDs could be shipped to another location and viewed there.

Apparently, the RAID already had a faulty drive in its 3-disk configuration. That created a potentially bad situation. We had identified the wrong drive as being the problem, but didn't think it was an issue since we were backing up the data to an external removable drive.

The backup software we used was NTBackup, with a backup script that ran in the scheduler. It backed up only the changed or added files on week nights, and on Fridays it backed up the entire drive. The backup file size was approximately 69 GB, with over 800 TIFF and JPEG images and a huge directory structure.

A description of the file structure is helpful here, since the recovery process depended heavily on knowing how the structure worked and what the file types were. Docuware creates the directories and the file names using a math algorithm, which allows it to determine where a document can be found just from the number of the document. Each directory can hold 255 images, and a 3-digit folder name increments up to 255. When the folder count reaches 255, a new upper folder is created, sub folders are created under it, and the increments start over at 000. So a document for royalties might live at royalties.000/000/00000001.001. The extension on the document increments to show the number of pages in the document, and the filename keeps incrementing in order to limit the number of files contained in one directory. Because Windows file types are determined by extension, this scheme doesn't indicate the document types. These files are TIFF files, and the header of each TIFF file holds data about the document and any attachments that may be required. The JPEG images I mentioned are also stored in number format but end in JPG. They are linked to their TIFF files by the data in the headers; in those cases the TIFF files contain no image data.
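The numbering scheme above can be sketched roughly like this. This is a hypothetical reconstruction, not Docuware's actual algorithm; the folder and file names are illustrative only:

```python
# Hypothetical sketch of a Docuware-style path scheme: folders of at most
# 255 files, with a 3-digit subfolder name that rolls over at 255.  The
# real algorithm is Docuware's own and may differ.
FILES_PER_DIR = 255
DIRS_PER_LEVEL = 255

def doc_path(cabinet: str, doc_number: int, page: int = 1) -> str:
    """Map a document number to its folder and file name."""
    folder = doc_number // FILES_PER_DIR      # which 3-digit subfolder
    upper = folder // DIRS_PER_LEVEL          # rolls over into an upper folder
    sub = folder % DIRS_PER_LEVEL
    # The file name is the zero-padded document number; the extension
    # counts the pages in the document.
    return f"{cabinet}.{upper:03d}/{sub:03d}/{doc_number:08d}.{page:03d}"

print(doc_path("royalties", 1))   # royalties.000/000/00000001.001
```

The point of a scheme like this is that the path is pure arithmetic: given only the document number, the software (or a recovery script) can compute exactly where the file should sit.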

The reason the file structure is important is that, to restore the data, the directory structure must remain intact. A simple undelete program might recover tons of images, but most recovery software examines the contents of each file, creates a file name of its own, and provides no directory structure, so simply having the images is useless.

A second drive in the RAID failed, making the entire RAID useless. The original sporadic drive that was failing was thought to be in the operating system array, and the plan at the time was to obtain an image of the OS so that when it failed entirely, we could restore the image to a single drive. But it turned out that the originally failed drive was in the data array.

Since the OS continued to work, the backup scheduler continued to work, which exacerbated the issue. The drive failed on Friday, and no one contacted IT about it until Monday, which meant the Saturday full backup ran. NTBackup was configured to overwrite the existing file with the new backup. This resulted in the data being lost and the main backup being overwritten as well.

The RAID was disassembled and images were made of the drives. An attempt was then made to re-create the RAID in software, but problems such as not knowing the striping algorithm and the drive header information prevented an easy rebuild. Two of the drives were accessible when assembled in the restructuring environment, but no directory structure could be recovered.

We then located a data recovery company that would attempt to recover the data. If they were able to recover it, we'd pay; otherwise no money would change hands. The amount was negligible anyway, so we chose to send the drives off for recovery and turn our attention to the backup drive.

The company we used offers to recover data from RAIDs for $800 USD. You really have to examine their web site to determine the actual cost; because these were SCSI drives, there was a 150% markup. There is also an additional fee for getting the drives back, and a further fee for any additional information, such as a file listing of the recovered files (it is very important to take advantage of this). Still, the price to recover the data was way below that of other organizations, which wanted the money up front and charged 10 to 100 times as much.

They had a location in California, and I thought this was handy, but it turns out the company is actually in Canada and all the other locations are UPS drop-off spots. I really confused the shipping process by showing up at the UPS drop-off store. The worst part was that I couldn't get a tracking number, as all the drives being sent to Canada were aggregated into a larger package and shipped in bulk. I called later and got the tracking number of the bulk shipment.

The company was very prompt at getting back to me for additional information, and even though I was put off at having to answer the same questions over and over, there was almost daily communication, which impressed me. The problem came when they said they had the data recovered. I asked for a partial directory listing, which they were happy to send me for free. That was a major folly. We paid the $1300 and had the data sent to me, and then found out that the partial listing covered the only 2 GB of data they had recovered a directory structure for. In addition, all of the files were garbled with bogus data because they had the striping completely wrong. The remaining 100 GB of data was all in one directory under made-up names, which is exactly what I would have gotten with my own recovery program. They offered to re-extract the data, but I was having pretty good luck with the backup drive, so I declined; I didn't want to pay to get the data a second time only to find it still not correct. In addition, they didn't have an external drive to save the data to, so I'd either have to buy a drive or send them one.

During the rebuilding phase, I had purchased Active UNDELETE 7 Enterprise, which claimed the ability to rebuild and recover from RAIDs. The recovery process was very flaky, and I had frequent problems with the program crashing. I thought I should build a bootable CD, since that is offered in the software; however, the CD version is seriously gimped and did not offer any ability to rebuild the RAID, so I had to install the software on the server. After much work, I realized I didn't have enough information to rebuild the array, and there was no assistance in the program to help determine striping or parity. I eventually abandoned the software for rebuilding the RAID.

After I sent off the drives, I thought I might be able to use Active UNDELETE 7 Enterprise to extract the lost BKF files on the backup drive. However, that also became a problem, as Active UNDELETE 7 Enterprise does not have a file signature for BKF, and though it has an option for adding signatures, apparently no one is contributing new file detection thumbprints, so the money I paid for it was wasted.

I later found Handy Recovery from Softlogica. This had a thumbprint for BKF files and found several clusters on the drive that were intact enough to give me files 20 and 30 GB in size. I used the evaluation version to extract one or two large files (you can only extract one file a day during the evaluation, but the large files really made this worth it). After I had extracted the files, I couldn't find any software that would rebuild files from the corrupted BKF archives until I ran across NTBKUP.exe; its page gave me the info I needed to recover the data. The designer of this package bypasses a lot of the overhead of NTBackup and allows the contents to be read even if the headers are missing. It would not build the directory structure unless the drive letter was in the file, and I didn't have the drive letter in all of the clusters, but this worked for me.

I was able to extract the directory structures and the locations of each file by running NTBKUP in verbose mode and redirecting the output to a text file, which I later manipulated to create the directories, change into each directory, and run the extract for the files inside that group.
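The manipulation step can be sketched like this, assuming a plain one-path-per-line listing. The real NTBKUP verbose output has a different format and would need its own parsing; the paths and names here are illustrative:

```python
# Sketch of turning a file listing into the directory tree needed before
# extraction.  The listing format is a simplification; real NTBKUP verbose
# output differs and would need its own parser.
import os
import tempfile

def restore_tree(listing_lines, dest):
    """Group listed files by folder and create each folder under dest."""
    by_dir = {}
    for line in listing_lines:
        path = line.strip().replace("\\", "/")
        if not path:
            continue
        folder, name = os.path.split(path)
        by_dir.setdefault(folder, []).append(name)
    for folder in by_dir:
        os.makedirs(os.path.join(dest, folder), exist_ok=True)
        # Here the real workflow would change into the folder and run the
        # NTBKUP extract for just the files that belong in it.
    return by_dir

dest = tempfile.mkdtemp()
tree = restore_tree([r"royalties.000\000\00000001.001",
                     r"royalties.000\000\00000002.001"], dest)
```

Grouping by folder first matters because the extract has to run from inside the directory each batch of files belongs to.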

There are some anomalies I can't explain in the NTBKUP output, but the recovery covered over 95% of the data and the file structures. (see details)

Lessons learned:

1. Don't rely on only one backup device. Currently, I am rotating two external backup devices and checking them for consistency and errors.

2. Pay attention to failed drives. Of course, my resources are limited to what the owners will pay for, and it always bothers me when my recommendations are ignored and I later have to present them with an issue that could have been avoided.

3. Obtain the entire evidence of recovery rather than a portion. Of course, even with a complete listing of the directories, I couldn't be sure the files were complete. The recovery company couldn't inspect the files either; with the extensions being numbers, they couldn't tell the files were TIFFs, though I had explained it. I might as well have been speaking a foreign language, since what I was telling them was unfamiliar to them.

Written by Leonard Rogers on Tuesday, December 21, 2010 | Comments (0)

Database conversion to PostgreSQL

Posted at 10:09:14 PM in Installations (48)

I've been working on a PostgreSQL version of Blog 9.0.  I'm in the testing phase now and will be looking at the other functions of this blog.

This has been an uphill course.  I manually converted all of the database table and field names to lower case, since PostgreSQL folds unquoted identifiers to lower case.  I didn't get all of the fields, and found that as long as all the names are in lower case in the actual database, the code can apparently use whatever case it wants; the ASP pages still find the correct tables and columns.
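A throwaway script can do most of that renaming work. This is only a sketch, under the assumption that the schema script is DDL-only with no string literals or quoted identifiers; the table and column names below are made up:

```python
# Throwaway sketch: since PostgreSQL folds unquoted identifiers to lower
# case, a schema script whose identifiers are already lower case sidesteps
# the case-sensitivity trap.  This assumes DDL only -- no string literals
# or quoted identifiers -- and the names below are made up.
def normalize_ddl(ddl: str) -> str:
    # SQL keywords are case-insensitive, so lower-casing everything is
    # safe for a pure-DDL script.
    return ddl.lower()

print(normalize_ddl("CREATE TABLE BlogEntries (EntryID bigint, EntryTitle varchar(255));"))
# create table blogentries (entryid bigint, entrytitle varchar(255));
```

Anything containing data rows or quoted identifiers would need a real parser rather than a blanket lower-case.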

I had a big issue with the bigint and bigserial field types.  bigint is an 8-byte integer, which gives an outrageous range on index keys.  I started out converting all of the field extractions from int8 to CLng, and that worked, but there were a lot of them to fix.  Instead, I discovered that you can tell the ODBC driver to treat int8 as int4, which is the same as the integer type used by MS SQL and Visual Basic.  After I made that change, I was able to view all of the screens with one exception.  I did not have a layout picked.  The database creation tool did not copy anything into that field, and when I displayed the web pages, I got an error about an invalid "/" or "" in the query.  It turned out I was submitting a null, and as soon as I selected one of the valid templates, those errors went away as well.
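One caveat with treating int8 as int4: it only works while the key values actually fit in 32 bits. A quick sanity check over the key columns would catch the day they outgrow the range (a hypothetical Python sketch; in practice the values would come from the real tables):

```python
# Hypothetical sanity check for the int8-as-int4 ODBC mapping: it is only
# safe while every bigserial key fits in the 32-bit signed integer range.
INT4_MIN, INT4_MAX = -2**31, 2**31 - 1

def fits_int4(values):
    """True if every key value survives the int8 -> int4 mapping."""
    return all(INT4_MIN <= v <= INT4_MAX for v in values)

print(fits_int4([1, 42, 2_147_483_647]))   # True
print(fits_int4([2_147_483_648]))          # False: this key would overflow
```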




Written by Leonard Rogers on Monday, December 20, 2010 | Comments (0)