Vertical Market
Here is an excellent wiki article on the subject. Vertical market software is any software written for a specific task in a specific industry. Generally, these packages handle the sales or operations side of a business. Those that deal with sales sometimes try to provide a full accounting package, but often fail when those packages don't integrate with the CPA's software. Those that don't usually try to provide some kind of export of sales data that can be imported into other accounting software.
Some examples:
Fostertrak - software for managing foster child placement. The primary functions of this software are to track the child, the family they are placed with, registered families and available beds, all the medical and registration requirements, as well as the county assignments. I'm not sure if FosterTrak is used anywhere but in California. The accounting interface, as far as I can tell, only tracks what the county owes the foster care agency.
MaintStar - That may not be the name of the software anymore, but it was designed to track maintenance requirements for equipment in any organization that runs a preventative maintenance program. This program is strictly internal and has no billing associated with it.
Cargowise - software for managing exports. It handles the paperwork involved with customs and tariffs and tracks the product from shipper, through the shipping facility, to consignee, through customs, door to door, port to port. There is massive overhead in the paperwork and in all the hands involved with shipping overseas, and this program helps manage that. It has an integrated billing system which produces an invoice and attempts to manage the billing, but it is very complicated. With that complication in mind, most customers don't want to use it for managing orders for office supplies, even though Cargowise claims that can be done within their software. They also claim to be able to export the invoicing to external packages, but I don't know of anyone actually doing that.
Accounting software can also be considered vertical market, but there are two or three packages which can be purchased off the shelf and then customized to fit most small businesses. These are NOT vertical market software: Peachtree (Sage) and QuickBooks.
Note: Sage made a move to eliminate the Peachtree name and now calls the software, which is still Peachtree, Sage 50. I'm not sure if that's a move to pull the software from retail shelves.
Generally, you can tell you are involved with vertical market software when you can't find any pricing information and can't buy it without a salesman. Sage took over another accounting package called ACCPAC, which was written to be customizable in Visual FoxPro. Every part of the software needed to be quoted individually, and then the user licenses on top of that. Salesmen for this software worked directly with programmers who could customize the accounting software to meet the end user's needs. The problem with any customized, and often any vertical market, software is that you can never get rid of the programmer, who comes at a premium. My last involvement with ACCPAC was to install modules for EDI and to interface the royalty software (also vertical market) so the thousands of royalty checks that had to be written did not have to be re-entered by hand.
Written by Leonard Rogers on Saturday, August 31, 2013 | Comments (0)
What is a Brute Force password attack?
I'm writing this because of how often I find, in my work, users who still create easy-to-guess passwords. The reason such a password works for so long is that no one cared to crack most of those accounts before, but we are seeing more and more email accounts hacked and the user's address list stolen. Anyone's email account is a valuable asset to these hackers because it lets them deliver targeted emails from a known sender to the friends he keeps on his address list. There's no more guesswork, and the hackers are not simply scraping web sites for email addresses anymore. What's even more concerning is that most of the people on the address list will have created whitelists that allow the sender's email to reach them.
So how do the hackers do it? Brute Force password attacks are simply computer programs or scripts that guess passwords, starting with the easiest to guess. The scripts are smart: they know if a system is designed to lock accounts after a certain number of failed attempts, but a lot of small domains don't have counters like this working, so the Brute Force attack can continue for as long as the attackers want. Frequently, they don't have to try for very long, because the passwords people use are easy to guess. Generally, the guessing starts with passwords like password, 1234, 12345 and password123; then they move on to passwords built from the email address itself, so for jerry@cox.net they will try jerry, jerry1, jerry2 and keep counting.
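To make that concrete, here is a minimal sketch of how such a guess list gets built. This is only an illustration, not anyone's actual attack script; the jerry address is the hypothetical one above, and the suffix list is just a guess at what would be tried first:

user=jerry
for suffix in "" 1 2 3 12 123 1234 2013; do
  echo "${user}${suffix}"
done
# prints jerry, jerry1, jerry2, ... on top of the usual suspects like
# password, 1234, 12345 and password123, plus a full dictionary run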
The other interesting thing is that people really like to have ONE password for everything. Problem is... once a hacker finds a password, that password goes into a database, and they use it on other accounts before they start randomly guessing. But random guessing isn't as effective as sequentially running through every word in the dictionary, then obfuscating those words, then concatenating two common words together. For example: the owner of The Doughnut Shop has an account at cox.net. His address doesn't look like it has anything to do with his business, but his password is thedoughnut5678. That's not an actual account, but you get the idea.
I've seen these attacks on my servers where they come from different IP addresses (different computers on the internet) but attack the same account. That makes it difficult to block them based on the computer they are using. I've seen the different guesses where they try over and over again and then stop for a long while. I've even seen them try to find email accounts by guessing in the same manner, hoping they will find a hole. And they do find the holes, obviously...
So, Brute Force doesn't mean anything other than constantly banging at the door until they manage to get it open, and as long as your accounts are connected to the internet, they can bang away for as long as they want, because it isn't a person doing it, it's an organization. They share information, write programs that do all the boring work, and then send out little viruses that give them access to other computers, which lets them remain pretty much anonymous.
So, what do we do about it?
I use this site to create passwords that are random, long and completely unguessable, leaving the hackers no option but to go completely random. And I use a little black book, something I never wanted to do, but I can't remember even one of the passwords generated by this site, much less all the different ones. For those who are more techy, you might try looking into 2-Step Verification for those sites that allow it. See Google's implementation for example.
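If you'd rather not trust a web site at all, a long random password can also be generated locally. A quick sketch using OpenSSL, assuming it's installed (it usually is on Linux and Mac boxes):

# 32 random bytes, base64-encoded: roughly a 44-character password
openssl rand -base64 32

It still goes in the little black book either way.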
Written by Leonard Rogers on Monday, August 26, 2013 | Comments (0)
Creating an SSH key for Putty
I thought the process for creating an SSH certificate would require SSL as well, but it doesn't. Creating a key for key-based SSH logins is all performed with the tools that come with Putty. Here is a good step-by-step tutorial for creating the certificate and a means of preventing logins through regular passwords. This will help prevent Brute Force login attempts; however, as the author notes, if you lose your keys, you will not be able to log in.
I don't know if disabling password logins also disables the console, but I don't have access to my server to test it. The article is in four parts, hyperlinked at the bottom of each part. After creating the initial key pair (public and private), you can copy the private key to any other machine that needs access to the server. You do not have to generate a new key pair unless the new person's login does not have access to the public key on the server.
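For reference, the server side of preventing regular password logins boils down to two small steps. This is a rough sketch from memory, not a substitute for the tutorial; it assumes the public key exported from PuTTYgen (in OpenSSH format) has already been copied to the server as id_rsa.pub, and the restart command varies by distribution:

# append the public key to the account's authorized keys
cat id_rsa.pub >> ~/.ssh/authorized_keys

# then in /etc/ssh/sshd_config set:
#   PasswordAuthentication no
# and restart sshd, e.g. on Ubuntu:
sudo service ssh restart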
Instead of disabling logins altogether, I would suggest creating a ridiculously long password using this site. That leaves the door open, of course, but creating a 32- to 128-character password and changing it as the need arises should make things very, very difficult for even a brute force attack.
Written by Leonard Rogers on Monday, August 26, 2013 | Comments (0)
Bucardo replication and PostgreSQL: some lessons learned
It was time to start replication services on my server. After reviewing the different packages, Bucardo was the one I chose.
I followed the steps, for the most part, on bucardo.org. The site seems to be outdated; from what I could tell, it hasn't been updated since 2009. However, if you look at the links to the discussion pages, you can see the developer is very active, as are the users. I believe he or she is mostly involved with development of the software and not as concerned with the web site's content. It's nice that this application is free. It certainly takes a lot of the developer's time.
There are many comments about Bucardo 5, but I wasn't able to find any links to it. The version of Bucardo I installed was 4.99, so I must be close. The documentation also isn't that great. For example, herd has been replaced by relgroup, but the documentation uses both terms. After some searching, I found that relgroup is the new name for herd, but many internal parts still use herd. I'm not sure if that means herd can be used instead of relgroup, or if relgroup cannot be used with all commands.
I could not find anything on conflict strategy on the web site. bucardo_ctl also seems to be missing in my version, or has been replaced with just bucardo. All the commands on the web site that are shown with bucardo_ctl work with bucardo on my system. However, I have Ubuntu 12.0.3T on another machine that I installed the same way as this Ubuntu 13.x.x box, and the Ubuntu 12 machine does have bucardo_ctl.
Back to conflict strategy. When doing a bucardo add sync, you can see the options allow for a conflict_strategy parameter with its various values, but you cannot find anything about it on the web site. I didn't put in a conflict strategy when I created my sync, so I tried to update the sync with bucardo update sync yada yada, but everything I entered there only gave me an error message without any clue as to what was missing.
No matter what options you supply, this is the message you get every time:
root@db:~# bucardo update sync
update:
bucardo update <type> <name> <parameters>
Updates a Bucardo object. The `type' specifies the type of object to update, while the `name' should be the name of the object. The supported parameters for each type are the same as those for the add entry elsewhere in this document. The supported types are:
`db'
`dbgroup'
`sync'
`table'
`sequence'
The internal help or error system helps with the type, which is correct, but offers no help on the name or parameters. This is the command line I used, which, based on the bucardo add sync options, should have worked...
bucardo update sync first conflict_strategy="bucardo_source"
I tried without the quotes also; both resulted in the same error message. I do have access to the database where Bucardo maintains these settings, so I intend to add other syncs, see what the conflict_strategy field gets updated to, and then manually correct the tables for the syncs I can't update.
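If it turns out conflict_strategy can only be set when the sync is created, something along these lines might work. I haven't tested it yet; the sync name second and the db names db1 and db2 are placeholders, and bucardo_source is the same value I tried in the failed update above:

bucardo add sync second relgroup=first dbs=db1:source,db2:target conflict_strategy=bucardo_source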
The other frustrating thing is that the documentation states that in the dbs list, which identifies the data sources, the first database will be the source and the rest will be the targets. This did not work for me. In fact, I wasn't able to get it to work correctly until I specified each database and the role I wanted it to play.
The sample on the web site confused me, as it appeared that they created the two test databases on the same server (they probably did, but I wondered who would want to replicate data on the same server, since the object of replication is to have a backup or load balancing [which is a discussion all by itself]). I also assumed that Bucardo needs to be installed on all the servers replication is to take place on, which is also not true. In fact, Bucardo doesn't need to be installed on any of the servers involved, master or slave.
I'm not sure why my installation didn't follow the documentation. I made a backup of my database because I wasn't sure what was going to happen, and it was a good thing I did, too. The server my source database is on is the same server I installed Bucardo on, so the connection I made to my source database used localhost. The second database was at another location, which I specified with its FQDN. This was the first concept I had to get straight in my head.
Since Bucardo only needs to be installed on one server (technically it can be installed on other servers, and I suppose that would be necessary for chaining to work, i.e. the slave database could also be the master for another database; right now, with my limited understanding, the only way I can see that happening is if Bucardo were installed on the slave with an entry identifying that machine as the source and another as the target), all the database connections need to be identified on that one server. Each database that will be involved is specified with the bucardo add db command, which provides the connection string information, whether the database is on this server or another one.
The documentation does say that on slave servers, only the bucardo user (a superuser) needs to exist on the target machines. When I set up my first trial, I used the bucardo user, but for later installs I used the actual user that connects to that database, and it has worked fine.
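For what it's worth, creating that documented bucardo user on a target amounts to something like this. A sketch only; the password is a placeholder, and since the plain application user worked for me, it may not even be necessary:

# on the slave, as the postgres user
psql -c "CREATE ROLE bucardo SUPERUSER LOGIN PASSWORD 'changeme'"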
The description of db <name> and dbname <name> didn't really explain to me what I was doing. It led me to believe that the names for the same database on the different servers would be different, but that makes no sense at all. Those database names have to be the same if you're going to use them for load balancing. You cannot specify in the same application that when you connect to this server you use this name, and for that server you use that name; doing so places the load balancing entirely on the software. Might as well have the software handle the replication too. But the real problem is in the bucardo database: you have to have a separate name for each database entry even though the databases are named the same, or you can't tell a source and a target of the same name apart. My misconception came from the idea that Bucardo would be talking directly to the other server's Bucardo installation, but that's not the way Bucardo works.
A good naming convention for db names is to indicate the master and the slave. So, if I were using a database on server1 as the master and a database on server2 as the slave, and the name of both databases was store, I'd create two database entries in my bucardo table:
bucardo add db store_master dbname="store" host="localhost" port="5432" user="store" pass="password"
bucardo add db store_slave dbname="store" host="remote.store.com" port="5432" user="store" pass="password"
Now the add sync uses the db names I just gave those two identically named databases, store_master and store_slave, instead of store, or localhost and remote.store.com.
I think it would be helpful to store the connections in a separate database in bucardo and identify the databases in those connections as a separate collection. I'm not sure if the current version I'm using will allow for that.
Now, the problem I had: using the above info, I did my add sync like this:
bucardo add sync first relgroup=first dbs=store_master,store_slave
The relgroup is another story altogether, but let's just say relgroup first was already defined as having all the tables in the database.
You'd think, based on the documentation, that store_master was the source and store_slave was the target, but that's not what happened. According to the documentation, the first db in the dbs list is the source and the second and remaining dbs are the targets, unless you override that with :source or :target.
When I tried to push the data with a one-time copy to the target, it was actually pushing from what I intended to be the target. The result was that adding a record to the actual source database caused a unique index error on the primary key. The settings can be reviewed by issuing the bucardo list sync command; this will show all the dbs being synchronized and which is the source and which is the target. When I flipped the source and target around, opposite what the documentation said to do, it still made my source the target, so I had to specify exactly what I wanted:
bucardo add sync first relgroup=first dbs=store_master:source,store_slave:target
Then and only then did it correctly assign the source and target. Best practice before activating the sync is to check it. Not being specific leaves the system to make the source and target assignments willy-nilly.
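So my routine is now to verify the roles before letting the sync loose; roughly like this (treat the activate syntax as a sketch and double-check it against the built-in help on your version):

bucardo list sync       # confirm which db shows as source and which as target
bucardo activate first  # only once the roles look right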
Another thing I noted is that the source database gets another schema called bucardo, which is probably why the bucardo user needs to be a superuser (just guessing though; I didn't provide the credentials for Bucardo in the connection string). That schema has all the triggers and delta tables needed to perform the synchronizations. The delta tables fill up when you add, delete or update records in the main table, then empty out as the updates are completed in the target tables.
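You can poke at this from psql. A quick way to see what Bucardo added to the source database on my install (the database name store is from the example above):

psql -d store -c '\dt bucardo.*'   # the delta and track tables
psql -d store -c '\df bucardo.*'   # the trigger functions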
About relgroups... I created my Bucardo dbs using addalltables and addallsequences, which didn't create the relgroups as I thought it would. I expected these to be added to relgroups with the same name as the db. I had to go find the information and create the relgroups manually.
The table names also need the schema they are in. So, if your tables are in the public schema, which most are, then you'd add the tables and sequences as public.table1 rather than just table1. The documentation doesn't show this in the example provided.
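Putting those two lessons together, the manual step looked roughly like this on my install. The table and sequence names are placeholders, and the exact parameter forms are worth double-checking against the built-in help on your version:

bucardo add relgroup first
bucardo add table public.table1 db=store_master relgroup=first
bucardo add sequence public.table1_id_seq db=store_master relgroup=first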
I have only added one database so far. I have several on the same server, but I wanted to make sure the updates worked as expected, and that one database has been updating fine.
One observation: if this is to be used for load balancing, it can only be used for lookups. Any program would have to take careful consideration when assigning database connections, sending updates, adds and deletes to the main database and lookups to a connection that could pull from either database, or only from the slaves. There are areas that may need to split the difference and, if the main is down, simply skip over it, such as when a client logs in: I check the credentials, and if he successfully logs in, I update his record with the date and time of the login. After considering this a little bit more... the web site depends on the data to even function, but there's no reason to add more content when the main database is down. However, information that can't be viewed unless the user is logged in won't be available.
Written by Leonard Rogers on Thursday, August 1, 2013 | Comments (0)