Microsoft has issued a warning1 to Windows® users that a new vulnerability has been discovered in Windows® Remote Desktop Services (RDS), also known as Terminal Services, affecting many Windows® Operating Systems, and that exploiting it requires no user interaction. To clarify: if a system runs one of these Operating Systems, has Remote Desktop enabled, and can be logged into over Remote Desktop Protocol (RDP) without first connecting to a Virtual Private Network (VPN), it could become infected without the user doing anything at all. The affected Operating Systems are listed below:
Windows® XP
Windows Server® 2003
Windows® Vista
Windows Server® 2008
Windows® 7
Windows Server® 2008 R2
It has been reported that “potentially millions of machines are still vulnerable.”2 This particular vulnerability is so widespread and potentially dangerous that Microsoft has released special out-of-band patches for Windows® XP and Windows Server® 2003.
Microsoft Windows® Patches for the BlueKeep Vulnerability
Windows® XP / Windows Server® 2003 – Security Patch KB4500331 (this patch must manually be downloaded from Microsoft and installed)
Windows® Vista / Windows Server® 2008 – Security Patch KB4499180 (this patch must manually be downloaded from Microsoft and installed) OR Monthly Rollup KB4499149 (this patch is available through Windows® Automatic Update)
Windows® 7 / Windows Server® 2008 R2 – Security Patch KB4499175 (this patch must manually be downloaded from Microsoft and installed) OR Monthly Rollup KB4499164 (this patch is available through Windows® Automatic Update)
Some IT administrators may respond that even though a computer runs one of the affected Windows® Operating Systems, it does not have Remote Desktop Services enabled, or it requires a VPN connection before RDP can reach the system, and therefore the system is not vulnerable.
Securing the perimeter of your network is important, but failing to install the latest security patches on computers inside the network can produce devastating results if a malicious actor defeats the perimeter security. We encourage you to run supported Operating Systems with the latest patches regardless of your current network topology. We recommend a tiered security approach which secures not only your network perimeter but also uses network segmentation, runs supported Operating Systems, installs current security patches, deploys internal network monitoring and security controls, and employs Role Based Access Control (RBAC), among other security best practices.
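As a quick triage step under these recommendations, an administrator can check whether a given host exposes the default RDP port (TCP 3389) at all. The Python sketch below is a minimal example: the host name is a placeholder, and a reachable port only tells you RDP is exposed, not whether the system is patched.

```python
import socket

def rdp_port_open(host, port=3389, timeout=2.0):
    """Return True if a TCP connection to the RDP port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS failures.
        return False

if __name__ == "__main__":
    # "server01.example.local" is a placeholder host name.
    host = "server01.example.local"
    print(host, "RDP reachable:", rdp_port_open(host))
```

A result of True should prompt a check that the system is patched or shielded behind a VPN; a result of False is not proof of safety, since the port may simply be filtered from where the script runs.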
Other sources of information about BlueKeep include:
Discovered in 2017 and publicized in 2018, Spectre and Meltdown are two vulnerabilities in how certain microchips were designed.1,2
These vulnerabilities place information stored in memory (e.g. passwords, email, web browsing information, documents, etc.) at risk of theft.3
For Spectre to be exploited, a device must have a vulnerable processor. Security researchers have verified Spectre can be exploited “on Intel, AMD, and ARM processors.”4
For Meltdown to be exploited, a device (laptop, desktop, server, smartphone, etc.) must have a vulnerable processor and the Operating System (OS) running on that device must be unpatched. While not all of the details are currently known, security researchers have verified that many Intel processors are vulnerable.5
Because the vulnerabilities lie in the processors, a complete fix which does not incur a degradation in system performance may rely on the processors being redesigned.6,7,8
IT administrators should not wait to act on this. Many companies, including Microsoft and Apple, are releasing software updates to help patch these vulnerabilities.9,10
A number of hardware vendors are releasing firmware updates (including but not limited to BIOS updates). Updating firmware (i.e., microcode) is a necessary step in mitigating the risk of Spectre or Meltdown being exploited, and keeping systems on the most recent production security updates is a best practice in any case.11
It is important to note that using the wrong BIOS or firmware update for your hardware may render the hardware unusable.12
Additionally, if the device loses power during a BIOS or firmware update, your hardware may become unusable.13,14
Each hardware, OS, and software vendor is responsible for providing their own patch. It has been reported that some updates may slow down device performance.15
Microsoft has released patches, but in order for your computer to receive them it must have a supported anti-virus product installed, and that anti-virus must create a special registry marker confirming to Microsoft that it supports the new patches. If the special marker does not exist, “Customers will not receive the January 2018 security updates (or any subsequent security updates) and will not be protected from security vulnerabilities.”17
According to one security researcher, here is a list of anti-virus products that have updates to protect against one or both of these vulnerabilities but do not, as of 8 January 2018, automatically create the special marker.18
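On machines whose anti-virus does not set the marker automatically, administrators have had to create it by hand. The fragment below is a .reg representation of the registry key and value commonly published for this purpose; it is provided for illustration only, and you should confirm the exact key against Microsoft's advisory (and your anti-virus vendor's guidance) before applying it.

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat]
"cadca5fe-87d3-4b96-b7fb-a231484277cc"=dword:00000000
```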
Imagine losing $100,000,000 in revenue in two days: one-tenth of a billion dollars gone in two business days. This was the reality for Delta Airlines in September of 2016, when a loss of power shut down many of their servers, causing thousands of flight delays. People casually use the term “crash” for basic program and process failures, but the word rarely conveys the impact a crash can have on a company. Companies that are not prepared with backups and continuity solutions risk hemorrhaging money and time for the entire time their network is down.
One of the contributing factors to “crash” being such an overused term is the fact that a crash can be caused by many different things, both internal and external. A crash is, at its most basic, an unwanted and sudden shutdown or cessation of function by a program or process. It can be caused by many different core issues, but among the most common are information overload and hardware failure. Information overload occurs when a program or process is asked to handle more information than it can manage; demand exceeds the capability of the software, and it crashes. Hardware crashes are more diverse, caused by a variety of physical or mechanical failures that can make the software logic conflict with itself or trigger emergency shutdown procedures within a program. These can stem from simple pre-existing conditions, such as trying to run a program whose demands are higher than your computer can meet. Not all process and program failures are crashes, however; the recent “WannaCry” malware, if present, can lock your files away and threaten their deletion for ransom, leading to a situation similar to a crash.
Why does network stability/continuity matter?
What truly makes a crash dangerous is its potential to “go down with the ship.” On a computer network, if a key component or program fails and crashes, it can take the network down with it; a single crashed server can make a network unusable from a business perspective, costing time and a large sum of money. As previously mentioned, in September 2016 Delta Airlines had a physical hardware failure that caused a power outage at their Atlanta facility. Not all of the servers there had backups, which led to a massive data loss. Flights were delayed; flight crews went into overtime and had to clock out under federal limitations, so flights were delayed even longer while replacement crews were found, and some passengers waited days for their flights. Vouchers were offered to appease many of these passengers, but by the time all was said and done, Delta reported losing over $100,000,000 in revenue within a few days.
How can I protect my data?
The act of protecting your sensitive data from these situations is often referred to as “data continuity” or “business continuity.” The idea is that if the worst should come and your data is the victim of a crash or attack, it can be recovered quickly and effectively. There are a few ways to go about this, from keeping up-to-date backups to keeping copies of your data at off-site or off-network locations that would not be affected.
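As a minimal illustration of the backup side of continuity, the Python sketch below copies a folder into a timestamped snapshot directory using only the standard library. The paths are placeholders; a real continuity plan would also replicate snapshots off-site and regularly test restores.

```python
import shutil
from datetime import datetime
from pathlib import Path

def snapshot(source: str, backup_root: str) -> Path:
    """Copy `source` into a new timestamped folder under `backup_root`."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_root) / stamp
    shutil.copytree(source, dest)  # fails loudly if the snapshot already exists
    return dest

if __name__ == "__main__":
    # Placeholder paths; point these at real directories in practice.
    print("Snapshot written to:", snapshot("C:/CompanyData", "E:/Backups"))
```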
NOTE: This exercise is to gain an understanding of what a forensic image is, and how they are created. We strongly recommend that you contact a certified forensic examiner to create images that will be introduced as evidence.
Not long ago, I was speaking with an attorney about a case that involved dates of creation and dates of access. They told me how concerned they were that when they copied these files to a flash drive, all of the creation dates changed to the day that they copied them! What could possibly be wrong?
A simple misunderstanding of how to acquire information was all that was at stake. Well, that and the case. For anyone needing to preserve the state of information on, say, a hard drive, it is important to seek the assistance of a certified forensic examiner. They will be able to make an accurate bit-by-bit “image” of the data source, so that it can be referenced, viewed, extracted, etc., without the risk of altering the state of the data. Now some of you might wonder why I put the word “image” in quotes. This is a term of art. Many attorneys think of an image as a photograph or a graphic. In the scope of forensic acquisition, it means a bit-by-bit duplicate of the media, created in such a way that it can be verified, and not altered.
Can anyone make a forensic image? Well, this part isn’t very difficult. But you do want to be certain that you document what you are doing, and can explain how you were able to authenticate that the image was correctly produced. At the very least, it is an EXCELLENT exercise for an attorney to do a forensic acquisition so that when you have to speak with an examiner, you will have more of an idea of what they are going to do for you and your case.
First, you will need forensic acquisition software. Not to fear, it is free from my friends at Access Data. The link for the Windows version that is current as of this writing is here: http://accessdata.com/product-download/digital-forensics/ftk-imager-version-3.4.3 . Once it is downloaded, go ahead and install it. It is quite small. I’ll wait.
The next thing we will need is two flash drives. Smaller is better for our example, but one should be slightly larger than the other, so a 1GB and a 2GB flash drive would be great. (It is important that you use different-sized flash drives; your destination should always be larger than your source.) Format the larger of the two devices (your destination media) so that there is no data on it. (A forensic examiner would do a ‘wipe’ to make certain the media is completely erased before beginning, but that is not necessary for this exercise.) Take the 1GB flash drive (this will become our SOURCE media) and copy some files from your computer onto it. Browse the flash drive to make certain that your newly copied files made it safely to our source media. Next we start FTK Imager. Once you start FTK, your screen should look like this:
At the top of the screen, on the left side are two small green icons. The first one allows us to pick a single device that we want to image. When you click that, a screen will pop up to ask you what it is that you want to create a forensic image of. In this instance we want to take an image of a PHYSICAL DRIVE. Select that. Your screen should look like this:
Click NEXT. You will now be presented with a drop-down box asking WHICH physical drive you want to image. Remember when I told you to use different-sized drives? The drop-down box identifies the devices currently attached to your computer. Since FTK doesn’t show drive letters here, you should pick the device that is the size of your source media. In this image you can see that I have two devices attached to my computer: my hard drive, and the 1GB flash drive:
When you click finish at the bottom of the screen, your source drive should be listed on the left hand side of the screen in FTK.
Now it is time to create our forensic image. While leaving the source drive plugged into your computer, add your DESTINATION flash drive. PLEASE be careful at this juncture to select the correct drives; we don’t want you to overwrite something important. Right-click on the drive that is in the evidence tree. Using the above example, you would right-click on \\PHYSICALDRIVE1. A small menu should pop up; please select EXPORT DISK IMAGE.
So far, so good. We aren’t done yet though …
When you click the EXPORT DISK IMAGE menu item, you will get a screen asking for the DESTINATION MEDIA information. It should look like this:
Please take care to tick the box at the bottom that says “VERIFY IMAGES AFTER THEY ARE CREATED”. This is of paramount importance. Then click the ADD button. You will be asked what type of image to create. These are different formats that are readable by different systems. The most universally accepted are DD and E01 images. You should not concern yourself with the other two types at this time. Just so we can all be on the same page, please select E01 and click NEXT. On this screen you can identify the information relevant to your case. None of this is mandatory, but it is all a really good idea. Go ahead and populate this information – you will see why in a few minutes. When you are ready click NEXT for the image destination screen.
First let’s click the BROWSE button and find the DESTINATION flash drive that you plugged in. (Note: there shouldn’t be any files on it. If there ARE files, you either did not format the drive, or you have selected the WRONG drive.) So, using my example, my destination flash drive is drive Y and the image filename I have chosen is “DemoImage”.
For the purposes of this exercise, we won’t go into the other settings on this page. After you have these items properly populated, click FINISH. Now you are returned to the CREATE IMAGE screen. Since we have no more source media to add, double-check that the box at the bottom that says “Verify images after they are created” is ticked, and click START. Since the source media is only 1GB in size, it should take less than 5 minutes to create the image and to verify it. When the process is finished you will see “Image Created Successfully” in the STATUS field of the progress box. A new box should have popped up on your screen that says “Drive/Image Verify Results”.
Mine looks like this:
This is a really important screen. The word HASH here is another term of art. A hash is a method of positively identifying a file, folder, or drive, so that it can be verified that it has not been altered. FTK Imager calculated two different types of HASH before it imaged your source drive. After it completed the process, it calculated those HASHES again, and they both matched. THAT means you have authenticated your image and can be certain it is an accurate representation of the source drive. If anyone were to alter anything in this image, even a comma, the HASH that would be calculated would NOT match. So, you have successfully created your first forensic image of a drive. Congratulations! Now, what can you do with it? Let’s go ahead and close the FTK windows that are up, and pretend that an attorney gave you this destination drive with the image on it for you to examine.
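Before moving on, it may help to see that the verification step is not magic: the digests FTK Imager reports can be reproduced conceptually with a few lines of code. The Python sketch below computes MD5 and SHA1 over a file, the two hash types commonly shown in FTK's verify results; change a single byte of the file and both digests change completely.

```python
import hashlib

def file_hashes(path, chunk_size=1024 * 1024):
    """Return (md5, sha1) hex digests of a file, read in 1MB chunks."""
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest(), sha1.hexdigest()
```

Run this over the same file twice and the digests match; run it before and after any modification and they will not, which is exactly the property the verify step relies on.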
When you look at the drive itself, you will see lots of files that have the same filename but different extensions. You can’t use Word, Excel, or Notepad to read these. What can you use? FTK Imager. FTK Imager will not only CREATE images, it will also READ them. Start FTK Imager again. Click the little green icon on the left to Add Evidence Item. This time when it asks the source type, select IMAGE FILE. Click next and browse to the image that you created on your destination flash drive. My screen looks like this:
Click on the DemoImage.E01 file. Hey! There are TWO of those. Well, not really. One is a TEXT file containing the case information and the hash information of the image, and the other is the E01 file that you created. Note the extension difference in the TYPE column. Select the E01 file named DemoImage.E01, then click OPEN, and FINISH.
You have NOW opened your forensic image of the source media that you created. In the column on the left, you will see the file DemoImage in the Evidence Window. If you click the + sign next to the items in the list, you will drill down to the files that are on your source device.
The next article will talk about all the things you can see in an image that you may not be able to see on the source media.
The modern shopping center is a crowded experience with a lot of money changing hands. Unfortunately, with so many people out and about, identity theft becomes a real concern. Identity theft is a problem that shoppers seem to put off, many of them with the “it won’t happen to me” mentality.
THIS MENTALITY IS A TRAP.
Complacency is never an option when it comes to identity theft, and with online shopping becoming more and more popular, it’s easier than ever for the ethically loose to obtain personal information from unsuspecting victims.
How Do I Protect Myself from Identity Theft?
Identity and information theft preys upon the unprepared and uncaring, but can be made much less problematic with a few simple preventative measures we at Micro Systems urge people to take.
Never Shop on an Unsecured Network. This is a simple one, but many people don’t realize the associated danger. Local shops and cafes that offer free public wifi often run unsecured networks. This means that anyone using a signal interceptor can capture any information people on that network type in, which often includes banking information and email passwords.
Never Use a Debit Card When Shopping Online. The problem with debit cards versus credit cards is that debit cards are directly connected to your bank account, and it is more difficult to dispute purchases made with them. If someone has your debit card, they have your bank account.
Keep your Information Close. Information attacks are going to increase during this season, so being a little more careful about who you give your information to is a reasonable precaution.
Invest in an RFID Blocker. These can be small cards or sleeves, often inserted into your wallet, that block the scans taken by a skimmer: a device used to obtain credit and debit card information simply by being near your pocket. Having one can mean much more peace of mind in busy shopping centers.
Keep Informed. Stay up to date about cyber attacks so you can avoid any websites or locales known for being identity theft hotspots.
Use Complex and Often-Changing Passwords. This is something people should do year-round, but if an excuse is needed, the holiday shopping season will do. A simple change like adding a numerical sequence and random capitalizations can make a password much more difficult to crack (e.g., “password” -> “12pAsSwOrD34”). Changing your passwords, even on a monthly basis, can also increase your personal security.
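A sturdier approach than hand-mixing digits and capitals is to let a cryptographically secure random generator build the password for you. The Python sketch below uses the standard library's secrets module; the length and character set shown are illustrative choices, not requirements.

```python
import secrets
import string

def make_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets.choice draws from an OS-level secure random source,
    # unlike the predictable `random` module.
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    print(make_password())
```

Pair generated passwords with a password manager; a password no human can remember is only practical if software remembers it for you.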
These are just a handful of strategies to protect your personal information and prevent people from obtaining what is most important to you.
As we progress in our technological age, so too does the role of technology in our spending habits, and it is becoming a rapidly larger part of the holiday shopping season (Cyber Monday is an excellent example). Because of this, a very common question received by IT professionals is:
“What computer should I purchase?”
A daunting question for individuals and businesses alike: finding the right computer is an important step in a large investment. Computers fill an ever-increasing role in our society and (especially for businesses) can greatly influence how much a person can access and accomplish. With this in mind, it is important to know how to decide effectively on a new machine.
What makes computers different?
Undoubtedly, you’ve heard of the “Mac vs. PC” argument when it comes to deciding on a computer purchase, when in reality you have many more options than that. Different machines built by different companies are made to do different things, and there are many choices. This means computers usually aren’t objectively better or worse than their competitors; they just do different things. For instance, to use the Mac and PC example, Macs traditionally have powerful graphics processors, high-quality displays, and a more streamlined interface, making them popular with artists, musicians, and the entertainment industry. PCs, on the other hand, typically have excellent central processors, more efficient batteries, and are a little easier to develop for, making them a strong choice for technical work such as you might find in an IT environment or law firm. As stated earlier, you have far more than two choices, as there are hundreds of PC and Mac models alone.
Hardware isn’t everything though, so you’ll also need to consider software. This means considering things like “What programs can I run?”, and “Are my programs updated?” After all, not all computers can run the same software. You’d also be considering things like antivirus options, operating systems, word processors, and so on. These can drastically change the experience you have with said hardware, as a new OS can make or break the user experience for new devices.
What Machine Is Right for Me and My Company?
Only you can decide which machine is right for you. For businesses, PCs are commonly recommended; we frequently suggest Dell Personal Computers to our clients. However, business clients should consider other technology as well to ensure the smoothest experience; other common devices include telephone systems, web and email filters, and cameras. When trying to find the right solution for your personal or business use, context is truly everything, so remember to always consult with your IT professional prior to making any infrastructure changes.
Information Technology companies and departments alike have always been plagued by a stigma: that if you need to call them, something is seriously wrong with your network. It’s a bit like being called to the principal’s office, and this feeling of trepidation is largely caused by a fear most technology companies share, one that is quite justified.
No. Network. Is. Safe.
In the field of technology, it is an unpleasant and inescapable fact. Security is of the utmost importance in modern technology, and it is often ignored because nobody wants to deal with it. But it is imperative that anyone working in this field not only understand how to safeguard their own network, but also understand the function and goals of the malicious software (“malware”) designed to harm it.
How Do Malicious Programs work?
An important step in understanding the function of these programs is to know that they are simply that: programs. On a conceptual level, a virus or malware program is not much different from any other program, except that it produces outcomes you do not want. Such software is designed to damage, control, analyze, or influence the hardware or operating system that it targets. This can range from encrypting files while awaiting a ransom to transmitting all the data from the target machine to a third party. These programs have a variety of sources, including but not limited to criminal organizations operating outside the purview of the law, lone programmers attempting to make a quick buck, or the always infamous extremist group. When it comes to prevention, the source is not as important; what does matter is that attacks and infections on a network can be the single most costly issue a company will face. If a network suffers, for instance, a ransomware attack, no files, accounts, or data can be accessed on that network until the ransom is paid, and even then the data may remain encrypted at the whim of the attacker.
How Can Malicious Programs affect my network?
There is an abundance of malicious software variations because, as previously mentioned, these are simply programs and thus can be unique in function and purpose, but for brevity’s sake we will cover some of the most important types. A relatively simple and common program is a trojan. A trojan’s purpose reflects its namesake: it pretends to be a legitimate or crucial piece of software to trick the user into downloading it, hides itself inside the local files of the machine upon installation, and then unleashes its “troops”; that is to say, it begins to do what it was designed to do. This can mean everything from copying data to deleting it. A newer type of malware that has been making the rounds lately is malvertising (you can read our previous TechBits article on malvertising for a much more in-depth description); suffice it to say that malvertising uses internet ads to infect the target machine. Ransomware is software that encrypts all the data on a network and holds the decryption key for ransom, though on occasion even paying the ransom will not coax the attacker into providing the key. Though it is important to know these types of malware, there are countless variants, and the variants are increasing at an alarming rate.
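To see why ransomware's encryption is so effective, consider a toy illustration: even a trivial XOR cipher makes data unreadable without the exact key, and real ransomware uses far stronger ciphers such as AES. The Python sketch below is purely educational and scrambles nothing but a sample string.

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of `data` with the repeating `key` (toy cipher)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"Quarterly payroll figures"
key = b"secret-key"

# "Encrypt": the result is unreadable gibberish without the key.
ciphertext = xor_bytes(plaintext, key)

# XOR is symmetric: applying the same key again restores the data.
assert xor_bytes(ciphertext, key) == plaintext
```

The point of the illustration: with any sound cipher, guessing the key is infeasible, which is why the only reliable defenses are prevention and restorable backups, not after-the-fact recovery.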
What Can I Do?
When people think of malware they often feel that they are safe with a single antivirus, a firewall, or (and this will make your IT cringe) a Mac, because Apple products “don’t get viruses” (yes, they do). While this can be enough for personal devices on a home network, the modern business cannot afford to use only a single source of malware protection. The most secure networks have layers upon layers of security and are very difficult to break through. On a more practical level, it is typically acceptable to have two layers: one passive, one active. An “active” layer of protection is something like the anti-virus you are probably familiar with, which actively scans files in your network to locate and quarantine dangerous programs until they can be properly disposed of. Passive protection is a little different. An example of passive technology is a web filter. The web filter does not actively search out malicious programs; rather, it acts like a sieve and prevents many malicious programs from ever coming into contact with your network in the first place. Another protection worth mentioning is the Web Application Firewall, or WAF, which monitors attempts from outside your network to gain access through applications that are internet-facing (such as web-based email or self-hosted websites). It is not uncommon to see thousands of attempts per day by malicious actors trying to gain access to a protected system through a web-based application.
A question anyone with an IT background has been asked at some point (and probably more than once) is this:
“What antivirus should I get?”
It’s an excellent question. There are many, many options for anti-virus/anti-malware software, some free and some paid. An adage to consider is that “you get what you pay for”; we like to add the codicil “if you are lucky” at the end. One option we at Micro Systems currently suggest is WebRoot, a comprehensive anti-virus product that we often combine with the added protection of the commercial version of MalwareBytes. However, at the end of the day, the choice of antivirus and malware protection will largely depend on your unique network environment.
To forget things you’ve learned is natural for us illogical humans, but what about computers? How exactly does a computer remember? Many people don’t realize that there are multiple types of computer memory, each playing a different role in data storage and retrieval. As a consumer or business owner, it is important to know the difference between these types and when they might need to be replaced. When it comes to computer memory there is no real short answer, so it is best to view the topic as a whole.
How Does Computer Memory Work?
Computer memory is tricky because it works less like our own memory and more like writing something down; the type of memory is the material you are writing on, sand or paper. There are two kinds of memory in a computer: volatile and nonvolatile. Volatile memory is like writing in sand: it is there to be easily and readily accessed by your computer to make things faster, but the information is lost as soon as power is lost, like waves washing it away. Nonvolatile memory is what most people mean when speaking of memory; it is like writing on paper, in that it is permanent.

So if nonvolatile memory never erases unless deleted, why do we have volatile memory? Its purpose is to keep frequently used information readily at hand: cached items like browser cookies, auto-fill entries, and temporary files. This cuts the processing time these items would otherwise take, since the computer can pull them from volatile memory instead of fetching them again from their original source. No doubt you have heard the term “RAM” in reference to computer hardware; most people know the rule of thumb that more RAM means a faster computer. This is partially true: RAM holds the volatile memory that ceases to exist when your computer is turned off, so the more information your computer can hold temporarily, the faster it can potentially run. You might notice that if you leave your computer running for extended periods without shutting down, it runs slower; this is because your available RAM is lower than it should be, since it has been accumulating data without being cleared. It should be mentioned, however, that RAM is only half the story when it comes to the speed of your device, and you should always know how much RAM your device can support at maximum.
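The volatile/nonvolatile split can be sketched in a few lines of code: a value held in a running program's variables lives in RAM and vanishes when the process exits, while a value written to a file survives a restart. The file name below is a placeholder used only for the demonstration.

```python
import os

# Volatile: exists only in this process's RAM; gone when the process exits.
in_memory = {"session": "abc123", "autofill": "jane@example.com"}

# Nonvolatile: written to disk, so it survives a restart or power loss.
with open("saved_state.txt", "w") as f:
    f.write(in_memory["session"])

# Reading the file back simulates recovering state after a restart.
with open("saved_state.txt") as f:
    restored = f.read()

assert restored == "abc123"
os.remove("saved_state.txt")  # clean up the demo file
```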
How Can Computer Memory Affect My Company?
This is a topic many companies brush to the sidelines, when in reality it is something you as a business owner will want to pay close attention to. When it comes to your storage (that is your nonvolatile memory), running out effectively ends whatever functions your computers handle. With no space for new information, you will stop receiving email, lose the ability to save files, be unable to download items from the internet, and run the risk of crashing your main servers, one of the worst things that can happen to a business computer network. The importance of keeping track of your storage usage cannot be stressed enough in a business environment. It is also important to keep an eye on RAM and volatile memory, which can cause decreased performance when low, though this is less often a problem. Luckily, there is a simple solution for running low: buy more. If your servers are holding about all the information they can, it is a matter of installing more drives for storage, or more RAM if it is working memory that is short. That being said, memory and storage can be expensive to purchase in large quantities, and many companies will want to avoid the cost entirely: don’t. Whereas it can be expensive to upgrade a device, it is far more expensive to lose a server for an extended period because it ran out of space to write information.
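Keeping track of storage can be automated with very little code. The Python sketch below checks free disk space and warns below a threshold; the path and the 10% figure are illustrative assumptions, not recommendations for any particular environment.

```python
import shutil

def free_space_percent(path: str = "/") -> float:
    """Return the percentage of free space on the disk holding `path`."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total * 100

if __name__ == "__main__":
    pct = free_space_percent("/")
    if pct < 10.0:  # 10% is an arbitrary example threshold
        print(f"WARNING: only {pct:.1f}% disk space remaining")
    else:
        print(f"OK: {pct:.1f}% disk space free")
```

A script like this can be scheduled to run daily and email the result, turning a silent failure mode into an early warning.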
Memory is an odd subject with computers, because they store information much differently than we do. As such, people often become confused when their computer develops a memory issue. Things like low disk space are common and easily fixed, though some more obscure issues can crop up with memory, like what to do when a hard disk becomes physically damaged and writing to the disk becomes nearly impossible. Should something like this occur, you should immediately contact your IT professional.
It's common knowledge that laptops and PCs can overheat when improperly treated, but servers are possibly even more vulnerable. Servers are typically left running continuously in a confined space, and overheating can seriously threaten your data and business continuity. Overheating is a multi-faceted issue with numerous possible causes: everything from the temperature of the room, to what programs are running, to CPU overclocking.
How Computers Handle Heat
As electricity is carried throughout your device, it inevitably generates heat that can damage the device if not dissipated properly. This is typically done with heat sinks and cooling fans inside the device. The cooling fan you're probably familiar with; it creates the "whirring" sound associated with booting up a computer. The fan has variable speed settings and will speed up or slow down depending on how much heat needs dissipating; you may notice that when you launch larger programs, you can hear the fan speed up in response. Heat sinks you may not recognize if you weren't looking for them; they are small metal fins standing perpendicular to their mount. Heat sinks work by simply providing a conductive surface for heat to transfer to; the bigger the surface area, the more heat can be carried away. There are a few other, less common cooling systems (even liquid-cooled devices exist), though you won't typically encounter these in an office or home setting.
What Exactly Does Overheating Do?
Overheating can be more of a problem than most people suspect, as it's typically associated with simple crashing and rebooting. Computers are designed to avoid internal fires and melting points for obvious reasons. Because of this, most modern devices are built with fail-safes that will begin shutting down portions of the device if overheating begins, likely culminating in a crash. In the best case, you reboot your device and everything is fine, provided you've removed the device from the heat source if possible. But overheating can wreak havoc if the conditions are right. Simple physics tells us that when things heat up, they expand. This is very bad for computers; if a device overheats badly enough, that expansion can physically warp components such as the hard drive, rendering it inoperable.
Not only this, but even modest overheating can slow your device and shorten its lifespan by up to two years. Most CPUs are designed for a maximum internal temperature of around 80 degrees Celsius (176 degrees Fahrenheit); if your device consistently runs at or near that limit, you may be killing it without even knowing it. All of this sounds bad, sure, but what does it mean for your business? An overheat of, say, your host server can mean a crash that keeps the system down until the server can be properly cooled and rebooted. That may take ten minutes, or it may take three hours, and time is money.
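Temperature limits get quoted in both scales, and it is easy to mix them up. A quick conversion (plain arithmetic, nothing product-specific) shows how different 80 °C and 80 °F really are:

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

print(c_to_f(80))  # 176.0: a typical CPU thermal limit
print(c_to_f(27))  # 80.6: roughly the warmest recommended intake air
```

So a chip running near its limit is far hotter than any room should ever be; the 80 °F figure applies to the air going into the machine, not the silicon inside it.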
How Can I Prevent My Device from Overheating?
There are a few common causes of overheating that most people (especially those handling important data) should know about. For personal computers or laptops, always make sure the cooling vents are unobstructed. If you have vents on the bottom of a laptop, for instance, rest the device on a hard surface while operating it; soft surfaces like carpet and cotton insulate the vents and can cause overheating. Another way to prevent heating issues is to simply clean your device every now and again. Dust built up inside a device acts as an insulator and leads to higher running temperatures, and it can also clog and stop the cooling fan. Another common one: if you're using a PC, do not operate the device with an open case! There's a rumor or two floating about that cracking open the side of your PC case gives it better airflow and helps it cool. What it actually does is disrupt the airflow path of the device's cooling fans and expose your computer's internals to external dust and debris, which can eventually cause overheating or outright damage. Another important aspect to look at is your device's location: stay away from tight, isolated spaces like desk drawers. Compact and convenient as that might sound, a tight space means poor air circulation and higher running temperatures. A popular trend amongst gamers and people wanting more out of their PCs is overclocking, which is, at its most basic, forcing the CPU to run faster than its rated speed. This won't cause instant death; however, should you choose to overclock your CPU, be aware of your operating temperature, because it will increase. PCs and laptops aren't the only devices susceptible to overheating, though. Your servers are just as vulnerable to heating issues, if not more so.
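If you want to know where you stand before trouble starts, most operating systems expose temperature sensor readings. Here is a rough, Linux-specific sketch (the sysfs paths below don't exist on Windows, where a vendor monitoring tool would fill the same role) that reads the kernel's thermal zones and flags anything running hot:

```python
import glob

def cpu_temps_celsius():
    """Read temperatures from Linux sysfs thermal zones.
    Returns a list of readings in Celsius; empty on systems
    without /sys/class/thermal (e.g. Windows, some containers)."""
    temps = []
    for path in glob.glob("/sys/class/thermal/thermal_zone*/temp"):
        try:
            with open(path) as f:
                # values are reported in millidegrees Celsius
                temps.append(int(f.read().strip()) / 1000.0)
        except (OSError, ValueError):
            pass
    return temps

for t in cpu_temps_celsius():
    if t > 80:
        print(f"Warning: a thermal zone is at {t:.0f} degrees C")
```

A check like this, run on a schedule, gives you the early warning that the fail-safe crash described above does not.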
Location is one of the largest issues to watch when it comes to server heating; when placing your server, you want a spot that is well ventilated, large enough to allow cool air to circulate, and free of open windows. When placing servers in racks, you also want to make sure they all face the same direction, so one server is not blowing hot exhaust into the intake vent of another. One last note on proper server care: make sure your server room's A/C is set for optimum device cooling, not people cooling. Intake air should generally stay below about 80 degrees Fahrenheit (27 degrees Celsius), so a server room has to be kept cooler than most people would keep an office.
Computers and other devices can talk to each other, and that matters because a computer on its own can only handle so much information; hosting all of, say, Google on a single server simply isn't possible without a server bigger than your average house. Computers talk to each other in networks through various means of connection, and that connection can be crucial to your operations as a company, or to how fast you can get that cat video to buffer at home. For those unfamiliar with the basic concepts of connectivity and networking, we offer a little primer.
Computers are intelligent things, insofar as they can handle a great deal of information, but they're limited by the amount of information a hard drive can hold. This is where networks come in: the idea of getting two (or more) devices to share the information they hold. When computers are connected they can share information, but the method of connection itself dictates how fast, and how far, that information can travel. A common type of connection you may have heard of is Ethernet. Ethernet is a type of cable (usually a thick white or blue cable with a clear plastic jack) that runs from the back of most devices into whatever provides your network capabilities (likely a router). An Ethernet cable works very much like a highway: you have one centralized avenue for information to travel (the cable itself) with multiple small "driveways" so information can leave its host device and travel on that highway (the "driveways" are the Ethernet ports). Information can then flow more or less freely between devices, and once that occurs, you have your network. Another common connection is one you most likely experience every day: Wi-Fi. Wi-Fi, at its core, is data transfer via radio waves. Wi-Fi differs from Ethernet in that data transfer is typically slower and the connection is generally less reliable and less secure, but the lack of cables means easier setup and far greater convenience. You are trading capability for convenience, though recent advancements in Wi-Fi allow transmission speeds approaching (but not matching) Ethernet cables. Fiber optics are a newer transmission type with incredible speed, though the glass cables are fragile and much more expensive than other options. The basic idea: in lieu of electrical signals over copper wire, fiber optics transmit data as pulses of light, allowing incredibly fast transmission.
Why does connection matter?
It seems like a silly question, but for many people, how they have a connection is irrelevant as long as they have one. Largely, people are satisfied to be connected and don't think about things like network speed. Sometimes your Wi-Fi signal may be blocked by a wall (older buildings may have block walls or concrete ceilings, which can result in poor signal), or your Ethernet cable might not be connected at both ends. This all seems trivial until you're attempting to pull a crucial document off a networked server and it won't download, or a Skype meeting across continents to close a deal keeps dropping video. Most modern companies use computer networking in some way (advertising via a website, grouped workstations, cloud servers), and these all require an internet connection; it can make a real, monetary difference to know whether your provider is having an issue or you simply have a poor signal because someone installed your router behind a brick wall. You should also be careful when accessing public wireless. Places like Starbucks will typically have unencrypted free public Wi-Fi; be careful on these networks and avoid using anything that requires a password: email, banking, and shopping, to name a few. These networks are easy prey for people looking to intercept personal information. The internet is not the quiet, gentle place it once was.
What can I do about my connection?
There are a variety of ways to improve your internet connection on your own without rousing that beast in your office, the IT department. These methods can be situational, though, and vary depending on the problem and the type of connection. First, you need to determine that it is in fact a problem with your network connection. What type of computer do you own? Some models come with a physical wireless switch that turns the radio inside your computer on or off; if it's off, you're not going to be connected to the internet anytime soon. Also check that you're connected to the correct network; Wi-Fi has a limited range, so if you're trying to connect to a network some distance away, you may have difficulty. On that note, you should always know whether you have a wired or wireless internet setup; you can tell by the connection icon in the lower right of most PCs.
A few examples of common symbols used to express your device's Internet connection
Another question to ask: are you the only one having issues? Ask around and see if anyone else can connect to the Internet; if they can't, the problem probably isn't isolated to you. So how do you determine where the problem is when it's not just you? Go to an adjacent office and ask your neighbor if they are having any trouble. If they are (and they use the same service provider), there is likely nothing much you can do, since it's on the provider's end. If they're not having issues, it's most likely a problem with your network. So what's the issue, exactly, now that we've determined it's your network? If everyone is still connected but has a weak or sporadic signal (one to two bars for Wi-Fi), check your router. It may be placed far from the machines it's connecting, or it may be obstructed; radio waves can travel through walls, but thick walls like concrete can severely weaken or block them. Resetting your router can often help, but you should never do this without checking with your boss and notifying your employees; the Internet might stay down, and that hurts everyone. Also, be careful before handling a router! Some routers are more complex than others, and flipping switches at random can cause real damage and lost company productivity. Beyond these basic solutions, it becomes a good idea to contact your IT professional.
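That same "is it me or the provider?" logic can be applied in software by testing two hops separately: your local router and a host out on the Internet. This is only a sketch; the router address 192.168.1.1 is a common default and an assumption here (substitute your own gateway), and 8.8.8.8 is Google's public DNS server:

```python
import socket

def can_reach(host, port, timeout=3):
    """Attempt a TCP connection; True if the host answered in time."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical addresses: replace 192.168.1.1 with your own gateway.
router_up = can_reach("192.168.1.1", 80)   # can we reach the local router?
internet_up = can_reach("8.8.8.8", 53)     # can we reach the wider Internet?

if internet_up:
    print("Internet connection looks fine")
elif router_up:
    print("Router reachable but no Internet: likely a provider issue")
else:
    print("Can't reach the router: check cables, Wi-Fi, or the router itself")
```

If the router answers but the Internet does not, the problem is probably on the provider's end; if even the router is unreachable, the issue is inside your own network.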