It is hard to believe that the Internet has only a few decades behind it. As the 1960s turned into the 1970s, new ideas for internetworking began to emerge. Fast-forward to today, and the entire landscape through which we connect ideas has been consumed by this digital space for thoughts.
Information Technology is inseparable from communication and collaboration because of the speed at which it operates and its natural tendency to evolve faster than most of us can comprehend. As the whirlwind of dust from launching this new frontier settles, we are finding proven methods for communication and collaboration that make efficient use of the technology available to us, and the improvements Information Technology has brought to the way we conduct business are plain to see.
Provide an Open Line of Communication
Most services have been digitized into the Information Technology realm. Voice over IP brings phone conversations into the realm of 1s and 0s, alongside email, chat, video streaming, file sharing and more. This provides a way to track all communications in one place instead of managing communications stored across multiple accounts and conducted over multiple mediums that may not otherwise integrate. Services such as Microsoft's Office 365 supply an integrated email, chat and video conferencing experience hosted in the cloud, with a centralized method to index and store the communications for fast searching and reference.
Provide a Creative Space
The digital realm provides an effective method of controlling user work environments and keeping data integrity secured using centralized policies and automated responses to conditional triggers. Maintaining a clean digital workspace supports creative focus and productivity. Imagine for a moment that you have a home that is perfectly spotless. You are sitting on the couch in the living room, drinking a glass of water. When you finish the water, you put the cup down on a table and forget the coaster. Immediately, a robot zooms out, picks up the empty glass, wipes the condensation from the table-top, takes the glass to the kitchen and puts it in the dishwasher, and then returns to its stand-by position in the closet. The environment is as if you had never disturbed it. That is the power of automation in Information Technology (and, amazingly enough, it is very close to a real-world robot scenario as well: Roombas are scurrying over hardwood floors throughout America, tormenting dogs and cats who are just trying to bask in the glory of their mess for a bit before returning to sterile modern living).
Remembering things is hard, and I find that it only gets harder as I age. You know what doesn't have trouble remembering things? Computer storage. Unless something happens to damage a device, computer storage is a precise memory system that cannot lie and will remember whatever you tell it is important. This is extremely useful when collaborating among large groups of individuals. Whether you are using Microsoft SharePoint Online to upload all of your documents and work on them simultaneously with your department, using Exchange and Lync Online to send, store, consolidate and index all of your communication threads, or engaging Trigon Replay & Trigon Online Backup to store and maintain versions of documents from your local computer or server infrastructure, you can be sure that Information Technology is remembering all of the things you have forgotten.
Efficient and effective troubleshooting is truly a learned and developed skill. There are many published methodologies and best practices to follow, which certainly have their merit; however, like most things in life, the guidelines are great but experience is required to truly hone one's skills. Below are just a few of the rules I follow during my own troubleshooting processes. This list is by no means exhaustive, but it provides some good insight into a few of the most valuable steps I employ.
1. Be Methodical - Do not just start clicking around thinking you may quickly find the answer and resolution. It is important to first analyze the system and understand how it should function under normal operations, then track the sequence of tasks required in order to determine exactly where the fault lies. What is occurring that violates normal operations? Was a recent change made? What seems out of place? If possible, use properly functioning systems as a source of comparison.
2. Document - Take notes on your initial findings, test results and overall progress. This doesn't just help you track where you are; it often becomes valuable in the future if a similar issue arises.
3. Don't Make Assumptions - It's easy to make assumptions based on what we believe to be true, but we may not have a full understanding of the situation even though we think we do. We also like to give others the benefit of the doubt and assume the work they performed was done correctly. Ask questions and double-check configurations. Also, don't skip the obvious: it's easy to assume that everything on the physical layer is working properly, but without empirical evidence, that's simply not a safe assumption.
4. Use Advanced Logging - Oftentimes a log may report a problem, but the event description is too vague. Complex systems and applications usually have an advanced diagnostic logging function built in but not enabled. If you know what you're looking at, these logs can point you to the exact problem (a concrete example follows this list).
5. Use Process of Elimination - Develop testing criteria, then run through the tests, altering only a single variable each time. Eliminate possible causes based on your test results, then move on to testing your next culprit.
6. Utilize Vendor Support - It is easy to burn time troubleshooting a third-party application because we often hesitate to contact support, knowing what a cumbersome process it can be before actual assistance is obtained. I always perform an assessment early in the process, then make initial contact with the proper support channels as I deem necessary. I also make sure to provide the basics upfront: OS versions, software versions, any specific variables to consider, steps I've already tried, and so on. If you don't provide this information, you will only delay assistance, since the vendor needs to gather it anyway; they may even end up advising you over email to try tasks you have already performed. I try to be as clear and thorough as possible so my request goes exactly where I need it to go. I also do not stop troubleshooting once I contact the vendor, unless there is a specific reason to do so. If you resolve the issue before they get back to you, great; if not, you're already that much closer to a resolution.
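As a concrete illustration of item 4, many Windows components ship with diagnostic event logs that exist but are disabled by default, and the built-in wevtutil tool can switch them on from an elevated prompt. This is only a sketch; the DNS Client log below is an illustrative choice, so substitute whichever channel matches the system you are troubleshooting.
:: enumerate available logs to find the diagnostic channel you need
wevtutil el
:: enable the normally-disabled operational log for the DNS client
wevtutil sl Microsoft-Windows-DNS-Client/Operational /e:true
:: disable it again when finished, to avoid unnecessary log growth
wevtutil sl Microsoft-Windows-DNS-Client/Operational /e:false
Remember to turn verbose logging back off when you are done; diagnostic channels can grow quickly on a busy system.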
In order to improve your own skills, first work on developing the individual components of your overall system. Once you have specific processes in place, abstract them in order to create a system which is applicable to each situation you encounter. Be sure to consistently use this newly minted system while simultaneously gauging its efficacy, then tuning it where required. This general approach will ultimately lead to less frustration, a quicker time to resolution and most importantly, more satisfied customers.
DirectAccess is a feature introduced in Windows Server 2008 R2 and greatly improved upon in Windows Server 2012. I consider the introduction a bold one, because at the time it required a fully-envisioned IPv6 infrastructure, something still being implemented incredibly slowly throughout the Internet. Lo and behold, with Windows Server 2012, Microsoft scaled back the tenacity with which it was pressing for IPv6 deployments and made DirectAccess available to us via simple SSL over IPv4.
What is DirectAccess?
DirectAccess is a means by which your enterprise workstation is able to 'phone home' without any assistance, such as would be required to access a VPN configured through a firewall or a Microsoft Routing and Remote Access server. The idea is that you are always able to route back to your Microsoft network using public IPv4 DNS records via the Secure Sockets Layer, similar to how you would sign in to a secure web page for sensitive information such as personal banking. This eliminates the need to integrate a service like RADIUS for domain-based authentication and to deploy VPN client software to all of your systems (not to mention training your staff on how to use it).
Why use DirectAccess?
Simply put, DirectAccess eliminates one more step needed to remotely access a corporate environment and reduces the surface area for end-user error. Since it uses the Secure Sockets Layer, which is shared by the aforementioned secure web browsing, remote routers and firewalls can usually be eliminated as variables: there are typically no restrictions on the SSL port, whereas a non-SSL VPN client would require that specific additional ports be opened at the connecting network, depending on the protocol being used.
A problem that used to exist in the old DirectAccess architecture of Server 2008 R2 was the reliance on IPv6, which, as I mentioned, can be a big project in itself to implement on a network that is not already using it. Server 2012 DirectAccess is fully IPv4 compliant, and its configuration has been greatly simplified.
What do you need to run DirectAccess (Windows 2012)?
DirectAccess requires the following components on your network:
- Client workstations running Windows enterprise software (Windows 7 Enterprise or Ultimate, Windows 8 Enterprise)
- If using Windows 7 clients, a local Certificate Authority is recommended to provide client-authentication certificates for backwards-compatibility. This is not a requirement in Windows 8.
- A Windows Server 2012 host with a network controller
- A Windows domain controller (running Windows Server 2008 SP2 or later) and a DNS server
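Once DirectAccess is configured, a quick way to sanity-check a client's connection from the command line is the built-in netsh tool. This is a hedged illustration rather than a full validation procedure; both commands are read-only:
:: show the state of the IP-HTTPS (SSL over IPv4) tunnel interface
netsh interface httpstunnel show interfaces
:: show the Name Resolution Policy Table the client received via Group Policy
netsh namespace show effectivepolicy
If the IP-HTTPS interface reports an error code, that is usually the first clue as to whether certificates, DNS or firewall rules are the problem.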
Contact Trigon today if you would like more information on DirectAccess and how it can improve your small business!
Hard drive encryption is one of those tools that administrators have a love/hate relationship with. In its simplest terms, it is a way to secure data so that it is inaccessible to those that are not authorized to have access. There are different ways to encrypt a hard drive, depending on how secure you want to make the information and how easily or difficult you want to make access to that data.
First off is basic hardware encryption. This typically requires a simple password to unlock the drive for use. As soon as the computer boots, a password is requested in order to use the drive. If the incorrect password is entered a certain number of times (typically three), the system needs to be rebooted or power cycled before you can try again. If the password is not known, the drive is not accessible.
Secondly, there is software encryption. This is a program either installed on or integrated with the operating system that can use any combination of methods to unlock the encrypted information. It could range from a simple password, like the hardware encryption technique described above, to a security-certificate-and-password combination, to a scheme that requires a specific hardware component (such as a specialized flash drive inserted into the system) plus one or more software measures (password, security certificate, and/or biometric reader) all together for access.
So why do administrators have a love/hate relationship with it? The good part about encryption is that if a drive or system is either lost, stolen, or somehow ends up in the wrong hands, it is difficult or impossible to break the encryption to access the information. Obviously, the more secure the measures to encrypt the data, the more difficult it is for any would-be hackers to access the information that they are not authorized to have.
The bad part about encryption is if the access method to the data by those authorized to have access is not available (password forgotten, specialized flash drive left in the office when the laptop is taken home, etc.), the data is not available when it is needed. This typically is only a nuisance, as the password can be retrieved if either a Master Password or some other password-recovery method is utilized or the specialized flash drive can be retrieved from the office the following business day.
The ugly part comes in when something out of the ordinary occurs. Most of the time, this is something along the lines of a particular person having the encryption password with no Master Password created, and then that particular person leaves the company. Or it is the specialized flash drive that gets broken or is unreadable by the system. Some of these types of risks can be mitigated by having a recovery measure implemented, like a Master Password or a secondary flash drive with the decryption information stored on it. However, not all risks in regard to hard drive encryption can always be avoided, as sometimes information is encrypted and should only be accessed by one person for security reasons.
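As one concrete example of such a recovery measure, Microsoft's BitLocker includes the manage-bde command-line tool, which can add a 48-digit numerical recovery password alongside the primary unlock method. This is a minimal sketch, assuming BitLocker on a C: volume, not a substitute for a tested recovery plan:
:: add a numerical recovery password as a backup unlock method
manage-bde -protectors -add C: -RecoveryPassword
:: turn encryption on for the volume
manage-bde -on C:
:: list the volume's protectors so the recovery password can be recorded
manage-bde -protectors -get C:
The recovery password should, of course, be stored somewhere other than the encrypted drive itself.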
So should you use hard drive encryption? The answer: it depends. How secure do you need your data to be so that if it does fall into the wrong hands, it won’t be easily accessible? What steps are you able and willing to implement to mitigate the risks imposed if the primary access method is permanently lost? What is the risk of the data being lost/stolen versus the inability to access that data?
If you need help with a disk encryption solution, please contact Trigon today!
Building a robust, secure, scalable and reliable IT infrastructure can be very costly. Conversely, not being prepared for an incident that could bring down the entire company for an extended period of time could be devastating and even more costly.
IT Risk Management is the process of defining and understanding the possibility of risk and the potential damage it could have on an organization. IT Risk Management usually covers the following four areas:
- Security – Ensuring that corporate data is protected from both external and internal threats
- Availability – Making sure that systems can be accessed at all times or, in the case of an outage, that the impact can be limited and the systems recovered quickly
- Performance – Establishing baselines and monitoring those metrics regularly (see the sketch after this list)
- Compliance – Putting proper policies in place to ensure that regulatory agency requirements are strictly adhered to
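For the performance area in particular, a baseline can be as simple as sampling a few key counters on a schedule with the built-in typeperf tool. A minimal sketch; the counters, interval and file name here are illustrative choices, not prescriptive ones:
:: sample CPU and free memory every 15 seconds, 20 times, into a CSV baseline file
typeperf "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" -si 15 -sc 20 -f CSV -o baseline.csv
Re-run the same collection periodically and compare against the stored baseline to spot degradation before users report it.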
These identified risk areas are not the sole responsibility of the IT department. While there are technical components and business processes that must be managed by IT, employee training is extremely important. Even the most stringent security policies cannot prevent a breach on their own: end users must abide by the policies and work within established guidelines consistently, every day. Security is a shared responsibility.
If you have not trained your employees on how to recognize and report possible risk or security concerns, Trigon Technology Group has a proven Security Awareness Program that can help your workforce make better decisions and, ultimately, lower your IT risk profile.
For more information, contact Trigon today!
With the expanded use of DAGs in Exchange 2010, there are good reasons for a DAG that spans datacenters. When bandwidth and latency are good, everything is smooth sailing; when either goes wrong, things can go horribly wrong. There are a few small but important things you should verify before implementing this, and leave time to test the setup before going to production. Without the proper configuration and hotfixes for your environment, things can go very, very badly. If latency is high, a race condition can cause your cluster to lose quorum, and when that happens all mailbox databases can dismount. Check Microsoft's published DAG hotfix guidance and get those hotfixes in place.
- Verify your NIC settings
- Verify your TCP settings
- Verify your cluster settings
Network Card Settings
The following settings should be checked on your network cards. They are important for all DAGs, but even more so when you are spanning datacenters across site-to-site VPNs. When everything is local and bandwidth is plentiful, the little things don't matter as much. One thing I found is that IPv6 should be enabled on all NICs used by Exchange, and that everything must be correct for routing and DNS. Make sure all replication and MAPI networks are reachable and configured correctly.
There are a few settings that can improve DAG performance dramatically; they help with the replication traffic for the DAG:
netsh int tcp set global chimney=disabled
netsh int tcp set global rss=disabled
netsh int tcp set global netdma=disabled
(equivalently, set the EnableTCPA value to 0 under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters to disable NetDMA)
The above settings do not seem to play well with Exchange, and disabling all of them will make things much happier; be sure to do this on all mailbox servers. It would be nice if this were common knowledge, but not many documents seem to cover it.
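To confirm the offloads are actually off after the changes, dump the current global TCP parameters; this is a read-only check and safe to run at any time:
netsh int tcp show global
The output should list the Chimney Offload State and Receive-Side Scaling State as disabled on every mailbox server.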
The following settings can help prevent major issues if your DAG physically spans datacenters. They will help keep your DAG resilient and head off a few issues, and some massive headaches, that network latency can otherwise cause you.
cluster /prop CrossSubnetDelay=4000
cluster /prop SameSubnetDelay=2000
cluster /prop SameSubnetThreshold=10
cluster /prop CrossSubnetThreshold=10
To verify, run the cluster /prop command. The cross-subnet settings help tremendously, as the defaults are a delay of 1000 ms and a threshold of 5; the values above will keep a spanned DAG running smoothly.
I hope the above can help someone.
Configuring a Cisco wireless access point out of the box seems like a daunting task, but if you are implementing a simple setup, a single SSID on a flat network, the configuration can be completed much more easily than you would believe.
Starting the device
After you have unboxed the device and powered it up, you should connect to it via console cable. The default username and password are both Cisco, as is the enable password. I would recommend changing these before putting the device into a production environment. From the terminal, you need to enter configure mode before entering the configuration commands. I recommend doing this configuration from the command line, as the GUI is slow, unresponsive at times, and requires multiple steps to complete the same tasks.
Starting in configure mode, create your SSID and establish your WPA pre-shared key with the following commands.
dot11 ssid NetworkSSID
authentication open
authentication key-management wpa
wpa-psk ascii (Type password here)
guest-mode
The above will establish the SSID, but it still needs to be assigned to the correct radio interface, and the wireless radio needs to be turned on. Cisco ships their WAPs with the radios off; there are warnings all over the device and packaging regarding this. The encryption cipher is set under the radio interface:
interface Dot11Radio0
encryption mode ciphers aes-ccm tkip
That will set the encryption.
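To finish, bind the SSID to the radio and bring the interface up. A minimal sketch, still under the same radio interface; Dot11Radio0 is assumed to be the 2.4 GHz radio (dual-band models also have Dot11Radio1, which needs the same treatment):
! bind the SSID created earlier to this radio
ssid NetworkSSID
! radios ship disabled; bring the interface up
no shutdown
With that, the SSID is broadcast and clients can associate.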
The last thing to do is configure the device with a management IP address.
interface BVI 1
ip address x.x.x.x y.y.y.y
There you have it: the fast way to get a Cisco Aironet up and running. Cheers!
Google's web browser Chrome has been praised as one of the most secure web browsers available due to the security features built into it. One such feature is sandboxing, which lets a piece of code run in a restricted environment without any I/O access, such as the ability to write to the hard disk. Sandboxing has played a huge part in making Chrome as secure as it is. For three years Google participated in an event called Pwn2Own, a competition to find security holes in popular web browsers, in the hopes of learning whether there were any holes in Chrome that needed to be addressed. Pwn2Own has laptops set up running fully patched versions of Mac OS X and Windows 7 with Internet Explorer, Safari, Firefox and Chrome. Each year Chrome came through unscathed. This year, though, Google opted not to take part in Pwn2Own and instead created its own competition named Pwnium, offering contestants money for finding and exploiting security holes. At Pwnium, a full exploit was discovered by Sergey Glazunov. The details of the exploit have not been released yet, but what is known is that Sergey managed to bypass the sandbox and gain full control of the computer using the access rights of the currently logged-on user. Google quickly patched the exploit and released the fix via Chrome's automatic update feature.
I personally have to applaud the efforts of the software companies who take part in Pwn2Own, and Google with its Pwnium competition, in trying to make the web a safer place for everyone. If you're reading this and have questions regarding security for your network, contact us and find out how we can assist you.
Every company has data leaks.
It is impossible to plug every one of them, but it is possible to manage them. A data leak doesn't have to be network access by nefarious individuals. Most likely it is your own employees taking action without really thinking of the consequences. Consider the domain admin giving out a domain-level account and password over his cell phone in a crowded elevator. He was trying to solve an issue but missed the environmental conditions he was in, so now everyone in that elevator knows domain-level admin credentials for that company. But you don't know which company he works for, you say? Sure we do: just look at his ID badge, clipped for convenience to his clothing. It's the little things that get you in trouble, too.
Some sources of data leakage are:
- Allowing access to personal email, which lets staff send out data without you tracking it
- Allowing USB usage, where staff can plug in a USB drive, a phone, or even an iPod that can siphon off data
- Sensitive papers lying about on desks, unsecured, to be viewed by anyone
- Talking about sensitive information in public spaces
How to manage this? Well, there are several ways. The most successful is to institute policies for your staff. Making staff aware that there are guidelines and consequences addresses most of the issues. You will need a training schedule for new hires and periodic reviews for existing users. Having each user acknowledge the policies with a signed document provides you a foundation for maintaining security. These policies can range from a clean-desk policy, which dictates what can be left out when a user is not at their desk, to technology policies, which dictate what devices are allowed onto the site and how they are used.
To support the policies, you can leverage technology. Active Directory Group Policy can control access to resources on the network and govern device usage, such as turning off USB ports (see the sketch below). You can also use third-party applications to control web access to email, track access, and allow access where appropriate.
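As one small illustration of that USB control, Windows can be told not to load its USB mass-storage driver with a single registry value (Start = 4 means disabled), which can be deployed centrally through Group Policy Preferences. A hedged sketch; test it before rolling it out broadly:
:: prevent the USB mass-storage driver from loading (set /d 3 to re-enable)
reg add HKLM\SYSTEM\CurrentControlSet\Services\USBSTOR /v Start /t REG_DWORD /d 4 /f
Keyboards and mice are unaffected, since they do not use the mass-storage driver.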
This may seem draconian, and it can be if misused. The trick is to apply the right amount of restriction to protect the company while balancing the access needed for work. So you can't access Facebook on your work computer: big deal, you're working. Odds are you have it on your phone anyway. Using technology to enforce the policies lets you maintain standards consistently, all day, for all staff. It is auditable and can be changed as the environment changes.
So keep your staff informed and your policies current, use your technology to simplify and standardize, and revisit both often for review and updates.
Microsoft spokespeople have been coy about when the Office 365 cloud service will launch, saying only that it will come out later in 2011. But CEO Steve Ballmer has revealed that it will launch in June.
Speaking in Delhi, India, to an industry group last week, Ballmer said, "We're pushing hard in the productivity space. We'll launch our Office 365 cloud service, which gives you Lync and Exchange and SharePoint and Office and more as a subscribable service that comes from the cloud. That launches in the month of June."
The cloud service will replace the current Business Productivity Online Suite (BPOS), and include access to Exchange, SharePoint, the Lync unified communications suite, and both desktop and Web-based versions of Office tools such as Word, Excel and PowerPoint. The Office 365 beta has attracted more than 100,000 customers, and was recently expanded to become a public beta available to anyone.
Whoa, now. It seems like Microsoft is finally ready to get the good folks that use their Office products to learn how the Cloud can help them. Don't be afraid of the Cloud, gang. We use it every day. I'm using the Cloud to write this post. Ahhh!!
I don't know about you, but I prefer to write documents via web browser, or to a lesser extent, a service that syncs automatically with Dropbox in order to store my files safely. Using Word is great and all, but if that HDD explodes while you're writing the best blog post ever, it's as good as toast. Apps like PlainText and Elements save while you're typing to the Dropbox folder of your choice. Late or not, Microsoft seems to be getting the idea with this instant-save business. The less the user has to worry about backing things up, the better.