Editor Note: This article was written by a guest author, Ilan Justh, a renowned IT Asset Manager and Software Licensing expert. Mr. Justh works with companies of all types and sizes, providing them with the advice and guidance they need to reduce risk, cut IT costs, boost productivity, and optimize the value of their technology systems.
What do you do when you have been tasked with finding all the needles in a BIG corporate haystack? You grab a strong magnet and get to work.
In 2004, I was hired as the first asset manager at a major art institute. My first task was to create a master CMDB of every technical asset owned. First, I had to determine what needed to be tracked. I met with the network team, a telecom representative, and my direct manager. We decided that peripheral devices like mice and microphones, as well as monitors, would not be included, since they were less expensive to replace than to repair if they broke down. That left only CPUs, printers, and select externals like scanners to inventory. While the telecom group opted out, the network team wanted us to log their servers, routers, switches, hubs, etc.
Some data did exist, but it was incomplete and inaccurate. However, it gave us an idea of how many computers we should expect to find, and how many desktop techs would be available to help. Luckily, our vendor contract included a clause that allowed us to run an annual inventory (I highly recommend this for anyone with a master contract, as long as the fees are not exorbitant).
Next, we needed to define what data should be collected as assets were found. Working PCs that were connected to the network were already reporting their internal information (memory size and type, disk details, installed applications, and patch levels) via a software agent. This represented a HUGE time savings for us, since data would only need to be gathered by hand from the few lab or non-networked units. We planned to accomplish this by creating survey floppies and memory sticks.
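To make that concrete, here is a minimal sketch of the sort of survey script such a stick might carry. It is written in modern Python rather than the period tooling we actually used, and everything about it (the field names, the output file) is illustrative; memory size and installed-software lists would require platform-specific tools beyond what the standard library offers.

```python
# survey.py - illustrative sketch of a stand-alone survey script carried on
# a memory stick; writes its findings back to the stick for later CMDB import.
import json
import platform
import shutil
import socket
from datetime import date

def collect_survey_record():
    """Gather roughly the basics the network agent reported automatically."""
    total, _used, free = shutil.disk_usage("/")
    return {
        "hostname": socket.gethostname(),
        "os": f"{platform.system()} {platform.release()}",
        "machine": platform.machine(),
        "disk_total_gb": round(total / 1e9, 1),
        "disk_free_gb": round(free / 1e9, 1),
        "surveyed_on": date.today().isoformat(),
        # Memory size and installed applications would need platform-specific
        # tooling (e.g., WMI on Windows), omitted here for brevity.
    }

if __name__ == "__main__":
    with open("survey_record.json", "w") as out:
        json.dump(collect_survey_record(), out, indent=2)
```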
Presuming the data collected was accurate, we produced a CPU master list that included only Windows-based units; DOS, Mac, and UNIX machines would be added manually, due to limitations in our software inventory package. As stated earlier, assigned techs would boot up all stand-alone systems and gather their information by hand using the survey floppies and memory sticks. To avoid affecting end users, the work was done over weekends with a four-person staff. We estimated five minutes per machine (though many units would likely be completed faster), and hoped to avoid access issues by asking each affected area to clear away any toys, plants, and papers beforehand.
Since we already had an approximate count, I was able to create area sub-totals for the four survey sites. From those, I built a master schedule/calendar by determining how much work could be done (150 to 200 computers per day, or 300 to 400 per weekend) and how to split that work up logically, minimizing travel between buildings and to and from sites outside the home campus.
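The scheduling math itself was simple. Here is a rough sketch; the per-site sub-totals below are illustrative stand-ins (chosen to sum to roughly the 6,100 units we eventually found), and the throughput figure is the midpoint of our 300-to-400-per-weekend estimate.

```python
import math

# Illustrative per-site sub-totals (not the real figures) and the midpoint
# of the 300-400 machines a four-person team could cover per weekend.
site_counts = {"Main Campus": 3200, "North": 1400, "South": 900, "Annex": 600}
per_weekend = 350

for site, count in site_counts.items():
    weekends = math.ceil(count / per_weekend)  # round up: partial weekends count
    print(f"{site}: {count} machines -> about {weekends} weekend(s)")
```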
As the search began, each team leader received a master list indicating what they would need to find in their region in the allotted time. We also provided extra asset labels, in case they found items without tags, or tagged assets that weren’t in inventory. Of course, both were found (in this scenario, I recommend investigating further to determine whether the items were purchased without following protocol, so similar situations can be prevented in the future). For the sake of speed and accessibility, barcodes were printed on each label, and teams were given handheld scanners. We even got a few mirrors with extension wands to see behind devices that were difficult to move.
Senior management had already signed off on this project. But now, the information had to be passed down to departmental managers, who – since we would be visiting every machine they oversaw – needed to know the dates, the effort involved, the intent, the risks and fallback plans, and who to contact with questions.
We also got approval for our notification message, which instructed each user to turn their machine off and to provide their BIOS logins, network IDs, and other passwords (in case we had to turn the unit on). Systems for which we could not gather passwords would be accessed as needed via system administration tools.
Because we would be “touching” every single machine, we considered embarking on other activities (such as security assessments of lab machines that housed highly sensitive data), but determined that the scope was too great and the risk too high. We simply noted those machines for management, with an offer to help with risk and security in the future.
Equipment was scanned, or read manually when necessary. Items without tags received new barcode labels and were logged on blank asset report sheets; items already on the master list were checked off. Secondary devices were noted as child assets of their parent computers. Printers had their physical locations recorded and their queue names noted (when network-connected). We also kept blueprint maps, which came in handy during a follow-up printer-related project.
Data was then logged into our CMDB to keep it as up-to-date as possible. We concentrated on correcting discrepancies and establishing the associations between child and parent devices.
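Those parent/child associations are worth a moment. In data-model terms, each child device simply carries a reference to its parent computer’s asset tag. Here is a minimal sketch; the Asset fields and tag values are hypothetical (our actual CMDB was a commercial product, not hand-rolled code).

```python
from dataclasses import dataclass

@dataclass
class Asset:
    tag: str                       # barcode printed on the asset label
    kind: str                      # "computer", "scanner", "printer", ...
    location: str
    parent_tag: str | None = None  # set on child devices attached to a computer

# A toy in-memory CMDB keyed by tag; a real one lives in a database.
cmdb: dict[str, Asset] = {}

def register(asset: Asset) -> None:
    cmdb[asset.tag] = asset

def children_of(parent_tag: str) -> list[Asset]:
    """All child devices associated with a given parent computer."""
    return [a for a in cmdb.values() if a.parent_tag == parent_tag]

register(Asset("A1001", "computer", "Bldg 2 / Rm 14"))
register(Asset("A1002", "scanner", "Bldg 2 / Rm 14", parent_tag="A1001"))
print([a.tag for a in children_of("A1001")])  # ['A1002']
```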
Months later, the process was complete. We had discovered over 6,100 units, with 80 devices still unaccounted for. Our next task was to locate them. We wondered if they had been disposed of, and after reviewing reports from our data disposal firm, we confirmed that many of the missing items had, in fact, been removed – bringing the number of lost devices down to 40 or 50. Next, we looked at the information we already had about those units, such as ownership or location. Users were contacted, and luckily, many knew what had happened. In most cases, the users had disposed of the machines on their own, or had stored them as emergency backups. Since stored PCs would still be considered “deployed”, and would count under our vendor contract, we informed the users that spares were already available for them. We then took the units back and marked them obsolete.
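If you are repeating this reconciliation step, the matching itself boils down to set arithmetic: the tags on the disposal firm’s reports come out of the missing list, and whatever is left needs owner follow-up. A quick sketch, with made-up asset tags:

```python
# Hypothetical asset tags, for illustration only.
missing = {"A2044", "A2107", "A3310", "A3922"}   # tags we never scanned
disposed = {"A2107", "A3922", "A5001"}           # tags on disposal reports

confirmed_disposed = missing & disposed   # close these out in the CMDB
still_missing = missing - disposed        # follow up with recorded owners

print(sorted(confirmed_disposed))  # ['A2107', 'A3922']
print(sorted(still_missing))       # ['A2044', 'A3310']
```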
Now we had just about a dozen missing units left. We visited technician work areas and discovered old machines stashed away. We marked these units as “stored”, rather than “deployed”.
My last attempt to find these missing items raised a few eyebrows! I performed a Google Desktop search, looking through email archives and network storage. Be careful if you attempt this: there are privacy and security issues involved, and it can create a lot of network traffic. Sure enough, I learned that a few units had been taken by former employees, and that one printer had been borrowed by a publicist in New York.
Ultimately, we were left with just seven lost devices. Fortunately, none of these were considered valuable – one was an ancient printer, and the rest were 15-inch monitors.
In the end, we created a solid baseline, ensuring that future inventory changes would be compared against an accurate database. We learned how many computers existed, and reduced our support contract. We identified all printers, so we could make smarter deployment decisions. And, we counted all notebooks and desktops, which improved our ability to track implementation and assess risk.
The data center team was amazed by the help the project provided. Since they had known almost nothing about their servers (locations, installed software, etc.), the advantages of our work were hard to quantify: we knew it would benefit backups, risk assessment, load balancing, contracts, and usage tracking, but the dollar value of that knowledge was something they had to see for themselves. Although it was scary that they didn’t already have much of the information we delivered, their manager was extremely thankful. And me? I won an award for the successful completion of this project!