Spam feels like a part of life these days. I expect that when I open my email, I'm going to see a couple of spam messages that the junk folder settings just didn't catch. It barely fazes me anymore; I just delete the spam in my inbox and junk folders and go about my day. The real problem is for those who receive far more than just a couple of spam messages in a given day, so I can understand the frustration that comes with having to go through and delete all of those. Thankfully for Office 365 customers, Microsoft provides a means of blocking those messages before they land in anyone's inbox. That tool is called Forefront Online Protection for Exchange.
Forefront makes it extremely simple to create rules that block or allow email based on information such as the message subject, the sender, or the sender's domain.
Not all of this information is needed for each rule, but it's great that rules can be set up to block or allow emails on a certain topic or from a specific sender or domain. Based on my experience, there did not appear to be any delay in mail filtering after a rule was set up. If you'd like to start blocking some of those pesky spam messages, follow these steps to set up rules of your own:
- Log into the Office 365 portal using an administrator account
- Click on the Manage link listed under Exchange
- On the left-hand side of the page, select Mail Control
- On the right-hand side of the page, click on the link labeled “Configure IP safelisting, perimeter message tracing, and e-mail policies.”
- If not already selected, click on the Administration tab at the top of the page
- Select Policy Rules right below the Administration tab
- On the right-hand side of the page, click on the New Policy Rule link
From there you can set the parameters for the rule and, once done, select Save Policy Rule. Your new rule will then appear under Policy Rules, where you can modify it as necessary or delete it if it's no longer needed.
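The portal is the documented way to manage these rules, but if your Office 365 tenant also exposes Exchange remote PowerShell, a rule of the same flavor can be scripted. This is a hedged sketch using the Exchange transport-rule cmdlets rather than FOPE's own interface, and the blocked domain is a placeholder:

```powershell
# Connect to Exchange Online remote PowerShell (this is the standard
# endpoint; your tenant's connection details may differ).
$cred = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri "https://ps.outlook.com/powershell/" `
    -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session

# Create a rule that silently deletes mail from a hypothetical spam domain.
New-TransportRule -Name "Block spamdomain.example" `
    -SenderDomainIs "spamdomain.example" `
    -DeleteMessage $true

Remove-PSSession $session
```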
Based upon what I've seen, Forefront doesn't always catch blatant spam messages on its own without the aid of rules; however, after creating rules I have seen the number of spam messages received drop dramatically. All in all, Forefront Online Protection for Exchange is a great tool for use with Office 365.
One of the hardest things to troubleshoot in the realm of networking is wireless connectivity. Fortunately, there are a number of things you can do to resolve wireless issues quickly and with little technical expertise.
First of all, if you are unable to establish a wireless connection, there are a few basic troubleshooting steps you can take. The easiest is to simply reboot your device (laptop, smartphone, iPad, etc.). Sometimes that alone will resolve the issue and you can get connected. The second thing to try is to reboot the nearest wireless access point (WAP); after that reboot, you can often connect as expected. If rebooting your device and the nearest access point doesn't resolve the issue, the next step is to remove the wireless connection from your device and try to connect again. This applies even if you were previously connected but just cannot access the network now.
To recap, the troubleshooting steps are as follows:
- Restart the wireless access point
- Reboot your device
- Remove the wireless network settings from your device
- Disable your wireless card
- Re-enable your wireless card
- Re-connect to the network
Granted, those are a lot of steps, but they will usually resolve your issue.
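On a Windows machine, most of those steps can also be run from an elevated PowerShell prompt. A minimal sketch, assuming a Windows 8-era OS and using placeholder adapter and network names ("Wi-Fi" and "OfficeWLAN"):

```powershell
# Remove the stored wireless network settings (profile) for the SSID.
netsh wlan delete profile name="OfficeWLAN"

# Disable, then re-enable the wireless card.
Disable-NetAdapter -Name "Wi-Fi" -Confirm:$false
Start-Sleep -Seconds 5
Enable-NetAdapter -Name "Wi-Fi"

# With the profile gone, pick the SSID from the network list and re-enter
# the security key to re-connect (netsh wlan connect needs a saved profile).
```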
Another thing that may be necessary is physically relocating the wireless access point closer to where you intend to use your device. In a business environment, this may not always be easy, as there is typically a dedicated Power over Ethernet (PoE) network connection at the spot where the access point plugs into the network.
In a business environment, more work goes into completing a wireless site survey. Specialized tools and utilities are necessary to optimally locate WAPs for coverage. If you do not have wireless access and are looking to implement it, completing a site survey will save you time and frustration down the road. One item reviewed during such a survey is the makeup of the building itself. If there is a lot of “brick and mortar”, you will need more access points installed than in a building made up predominantly of drywall and drop ceilings. Wireless signals easily penetrate those lighter materials, but the thicker the material, the harder it is for the signal to make it through.
Another item reviewed during a wireless site survey is the type of coverage you are looking for. If you want “access everywhere”, you will probably need additional access points, whereas if you only need access in “most” locations, you can typically get by with fewer devices. For the most often used locations, a centrally located WAP is typically sufficient, but as you move farther from the access points, you will have limited connection speed or no connection at all. For instance, if you have a 300-foot hallway and only need access in the majority of areas on either side of it, you can probably install two or three access points at specific intervals. If you require access in the very corners at the far ends of the hallway, you may need to put in additional devices to extend the range.
Now comes the fun part: are you going to be moving down the hallway and need constant communication, or can you drop your connection for a second or two? Those who need a constant session from one end of the hall to the other will need a more advanced, centrally managed wireless network. A controller allows the access points to “talk” to each other as you go from one device to the next. If your access points don't talk to each other and are simply connected to the network to provide wireless service, you will lose the connection from one access point as you leave its range and pick up another as you move into its range; during this change of access points, you will temporarily drop network connectivity. A wireless site survey will review this scenario and take steps to ensure that you are never too far from any access point and won't lose your connection.
If you are interested in discussing the pros and cons of a wireless site survey, please contact Trigon at email@example.com or (484) 323-5000.
If you own a local business in the Central PA / Philadelphia region and accept payment through credit cards, you should be familiar with filling out an annual Self-Assessment Questionnaire (SAQ) to keep current with your Payment Card Industry Data Security Standard (PCI DSS) compliance requirement. Depending on how your business accepts credit cards, whether through a Point of Sale (POS) system, a web-based e-commerce portal, or card-not-present transactions over the phone, adhering to the requirements of PCI DSS and becoming compliant may be a daunting task. Businesses with payment card systems that store cardholder data are required to complete SAQ D, the strictest of all the SAQs. This form has 288 questions and requires many policies and procedures to be implemented. Qualifying for one of the other SAQs, which are basically subsets of SAQ D, would lessen the burden of becoming PCI compliant. But don't confuse an easier SAQ with being less secure. SAQs A, B, C-VT, and C require an even more secure infrastructure as it relates to PCI and cardholder data by not allowing any cardholder data to be stored on any system in the environment. So you ask, "How can I prevent the storage of cardholder data and move from SAQ D to SAQ C?" The answer is tokenization.
Tokenization works by replacing cardholder data in your payment system with a unique string of characters called a token. This token is generated by a payment gateway, such as PayPal, and is sent back to the merchant's payment system to take the place of the cardholder data. The gateway then completes the transaction with the acquirer or bank. Any future transactions use the token, removing the need to transmit cardholder data. By eliminating cardholder data onsite, tokenization greatly reduces the scope and risks associated with the PCI security standards. While tokenization does lessen the burden of your annual PCI SAQ, it is always best to consider all of the guidelines outlined by the PCI DSS, and to implement as many security best practices as possible even when they are not strictly required for compliance.
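To make the concept concrete, here is a purely illustrative sketch of what happens on the gateway side. None of this code would ever run on a merchant system; the point is simply that the merchant keeps a meaningless token while only the gateway's vault holds the real card number:

```powershell
$vault = @{}   # gateway-side lookup table: token -> cardholder data

function New-Token([string]$Pan) {
    # A random token with no mathematical relationship to the card number.
    $token = "tok_" + [guid]::NewGuid().ToString("N").Substring(0, 16)
    $vault[$token] = $Pan   # only the gateway's vault ever sees the PAN
    return $token
}

# The merchant system stores and reuses only the token for later charges.
$token = New-Token "4111111111111111"   # standard Visa test number
Write-Host "Merchant stores: $token"
```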
While Trigon does not implement tokenization directly (your payment system vendor or acquirer would assist you with this), Trigon can assist you with becoming PCI compliant. Trigon has extensive knowledge and expertise in securing data infrastructures and is proficient in vulnerability management.
If you have any questions or would like to discuss how Trigon may help you with your PCI requirements, contact Trigon today!
If only Alice had a pair of these wicked specs, she might have ignored the brazen, pocket-watch-holding bunny and steered clear of the rabbit hole. Then again, in my initial foray, I must admit that I was not impressed with the physical appearance of Google Glass (no, I am not a slave to fashion, but I do not want to walk this earth as a live-action model for a ‘B’-rated Tron movie!). However, upon further investigation, the sheer capability of Google Glass has me strongly considering a life outfitted in iridescent, electrode-clad skin suits. I mean… c’mon, look at these bad boys:
…errrrr….. wait a minute, that’s not it – ahhhh, here we go:
Uh-huh, I knew it. You think they are pretty sweet also! If you have not been formally introduced already, ladies and gents, let me present to you Google’s latest and greatest (we can dispense with the drum roll) Google Glass!
At first glance (without your Google Glasses on, of course), Google Glass is a wearable computer that utilizes a head-mounted display to provide smartphone features and functions in a sleek, futuristic, hands-free format. Further investigation reveals that the device does more than make you look trendy: it boasts such features as the ability to take a photo or shoot a video, send a text message to your buddy, get directions to your favorite hideaway, or confirm flight information, all using simple voice commands. Truly remarkable!
Google Glass was introduced to the public in August of 2011 with plans to have a version available to our greedy hands by the end of 2013. Initial testing of the glasses was coordinated through the Glass Explorer Program (the program has since closed but you are able to place yourself on an informational waiting list) and live field testing is being completed by the ‘bold, creative individuals’ who answered the call on Twitter or Google+ to explain in 50 words or less on how their lives would be enriched by using Google Glass. I am sorry that I missed this bus!
Though the concept of a wearable heads-up display is not new, packaging this technology into a universally acceptable device that is anticipated to weigh less than an average pair of sunglasses is. Those of you fortunate enough to have answered the call, while still holding on to your 32oz double-caf whipped caramel skim latte and noshing on your organically correct marathon muffin, can begin to enjoy surfing the web, taking photos, and texting, all by simply saying ‘ok glass…’. I am truly at a loss for words at how cool this technology is going to be, and I am sure that once Calvin Klein, Ray-Ban, and Oakley throw their hats into the ring for lens development and options, we will truly see and appreciate Google Glass’ full potential. If you have not checked this technology out, do yourself a favor: stop what you are doing, do not pass go, and do not collect $100. Get Google Glass on your mind. I already know the first question I am going to ask: ‘ok glass, what IS really at the bottom of the rabbit hole…’
If you have any questions or would like to discuss how Google Glass may affect the way you work, contact Trigon today!
With the release of Windows Server 2012, Hyper-V 3.0, and the new System Center suite, Microsoft has truly developed a hypervisor solution that can rival the entire feature set of VMware's vSphere platform. I tend to prefer and recommend Hyper-V over VMware for a few reasons, the prominent ones being cost and availability. It's true that in the past Hyper-V's feature set was not as strong as VMware's, but for most SMBs the feature set was indeed enough to satisfy their business and technical requirements. The cost of VMware could often only be justified in larger organizations with bigger IT budgets and more demanding requirements.
Things have now changed. The gap between Hyper-V and VMware has narrowed to the point where it's now virtually non-existent. True, VMware provides a solid and mature hypervisor platform, but many organizations are now seriously considering Hyper-V over VMware due to its newly bolstered feature set, low cost, and overall ubiquity.
One of the newest features in Hyper-V 3.0 that I've worked with lately is Live Migration. For those who are familiar with Hyper-V clusters, this concept is not new, since Live Migration is one of the principal reasons to implement a Hyper-V cluster. For those not familiar with clusters, some clarification is in order as to what the new Live Migration feature entails in a typical setup; that is, a setup with a few standalone Hyper-V hosts and no shared storage.
When you're working in a cluster and initiate a Live Migration, ownership of the virtual machine and its working memory are simply transferred from one node to another (see: http://trigon.com/tech-blog/bid/35259/Microsoft-Virtualization-Part-2-Live-Migration). Depending on the RAM consumption of the VM, this could take seconds or just a few minutes. With the use of shared storage, the VHDs don't have to move, so it's basically a quick process with no downtime incurred.
To avoid confusion, a distinction should be made between a Live Migration in a clustered/shared-storage environment and a Live Migration in a standalone environment. In Hyper-V 3.0, and in our "typical" scenario, a Live Migration is essentially a Live Storage Migration. This Live Migration does indeed keep the virtual machine online, but it copies the entire VHD and VM configuration files to the new host, not just ownership of the VM and its working memory state. It is a much more time-consuming process, but allowing administrators to perform this type of Live Migration directly from Hyper-V Manager adds a great deal of flexibility without the need for an expensive or complex infrastructure.
It is important to note that Live Migration in a "typical" scenario isn't very scalable. Even if you have a dedicated 10 Gb Live Migration network, if you have 30 VMs to migrate, that is at least 30 VHDs that have to be fully copied over the network. Yes, you could queue them up through PowerShell and get it done (see the sketch below), but it could quickly become a cumbersome process. The moral of the story is to ensure you do not confuse the capabilities and function of a Live Migration in differing scenarios. Also, having Live Migration capabilities at your disposal right out of the box does not negate the need for a Hyper-V cluster to provide High Availability for your virtual machines.
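For reference, here is roughly what that PowerShell queueing looks like on Server 2012. A minimal sketch with hypothetical host, VM, and path names, assuming Live Migration has already been enabled on both hosts:

```powershell
# Shared-nothing Live Migration from a standalone Windows Server 2012 host.
# "HV02" and "D:\VMs" are placeholders, and Live Migration must be enabled
# on both hosts first (Enable-VMMigration, Set-VMMigrationNetwork).

# Move a single VM, including its configuration, working memory, and VHDs,
# to the destination host while it stays online.
Move-VM -Name "APP01" -DestinationHost "HV02" `
        -IncludeStorage -DestinationStoragePath "D:\VMs\APP01"

# Queue several VMs; each full VHD copy completes before the next starts,
# which is exactly why this approach doesn't scale to dozens of VMs.
"APP02","APP03","APP04" | ForEach-Object {
    Move-VM -Name $_ -DestinationHost "HV02" `
            -IncludeStorage -DestinationStoragePath "D:\VMs\$_"
}
```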
DirectAccess is a feature introduced in Windows Server 2008 R2 and greatly improved upon in Windows Server 2012. I consider the introduction a bold one because, at the time, it required a fully realized IPv6 infrastructure, something still being implemented incredibly slowly throughout the Internet. Lo and behold, with Windows Server 2012, Microsoft scaled back its insistence on IPv6 deployments and made DirectAccess available to us via simple SSL over IPv4.
What is DirectAccess?
DirectAccess is a means by which your enterprise workstation can ‘phone home’ without any assistance, such as would be required to access a VPN configured through a firewall or a Microsoft Routing and Remote Access server. The idea is that you are always able to route back to your Microsoft network using public IPv4 DNS records via the Secure Sockets Layer, similar to how you would sign in to a secure web page for sensitive information, such as personal banking. This eliminates the need to integrate a service like RADIUS for domain-based authentication and to deploy VPN client software to all of your systems (not to mention training your staff on how to use it).
Why use DirectAccess?
Simply put, DirectAccess eliminates one more step needed to remotely access a corporate environment, and reduces the surface area for end-user error. Since it uses the Secure Sockets Layer, which is shared with the aforementioned secure web browsing, remote routers and firewalls can also be eliminated as variables, since there are usually no restrictions on the SSL port; a non-SSL VPN client, by contrast, would require that specific additional ports be opened at the connecting network, depending on the protocol being used.
A problem that used to exist in the old DirectAccess architecture of Server 2008 R2 was the reliance on IPv6, which as I mentioned can be a big project in itself to implement on a network that is not already using it. Server 2012 DirectAccess is fully IPv4 compliant, and the configuration of it is far simpler.
What do you need to run DirectAccess (Windows 2012)?
DirectAccess requires the following components on your network:
- Client workstations running Windows enterprise software (Windows 7 Enterprise or Ultimate, Windows 8 Enterprise)
- If using Windows 7 clients, a local Certificate Authority is recommended to provide client-authentication certificates for backwards-compatibility. This is not a requirement in Windows 8.
- A Windows Server 2012 host with the Remote Access role installed
- A Windows domain controller (running Windows Server 2008 SP2 or later) and a DNS server
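Once those pieces are in place, the role setup itself can be done in a few lines on the Server 2012 host. A hedged sketch using the RemoteAccess PowerShell module, where the public name and the client security group are placeholders for your environment:

```powershell
# Install the Remote Access role along with its management tools.
Install-WindowsFeature RemoteAccess -IncludeManagementTools

# Configure DirectAccess; clients reach this server over SSL (TCP 443)
# at the public name below ("da.contoso.com" is a placeholder).
Install-RemoteAccess -DAInstallType FullInstall `
    -ConnectToAddress "da.contoso.com"

# Scope the client GPO to a specific security group rather than
# all domain computers (group name is a placeholder).
Add-DAClient -SecurityGroupNameList "CONTOSO\DirectAccessClients"
```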
Contact Trigon today if you would like more information on Direct Access and how it can improve your small business!
Windows 8 has been successfully launched and many new computers are shipping with it pre-installed. What if your business isn't ready for the jump to the new operating system? Never fear: Microsoft has provided a fairly painless (though slightly tricky) way of downgrading your Windows 8 Professional PC to Windows 7 Professional.
First, you will need to assemble everything required for the downgrade. Make sure the computer is running Windows 8 Professional (32- or 64-bit doesn't matter, but only the Professional version of Windows 8 is eligible for downgrade). Next, you will need installation media (DVD or USB) for Windows 7; again, either the 32- or 64-bit version is usable, but you need to match it to the currently installed Windows 8 version. You will also require a valid, temporary Windows 7 license key. Finally, for the post-installation tasks, you'll need a telephone, pencil, and paper.
Before starting the downgrade process, it is recommended that you back up all of your files to a secure, external location like a network share or removable USB media. Then, create recovery media for Windows 8, which should include a system image and a recovery disk. Once the recovery set is finished and placed in secure storage, you are ready to begin the actual process of downgrading the computer.
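For the file backup, robocopy (built into Windows) is a simple option. A minimal sketch; the source path and destination share below are placeholders for your own environment:

```powershell
# Copy all user profiles to a network share: /E includes subfolders,
# /Z uses restartable mode, and /LOG leaves a record you can review.
robocopy "C:\Users" "\\server\backup\Win8-PC" /E /Z /R:2 /W:5 /LOG:"C:\backup.log"
```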
The actual downgrade process is the same as if you were performing a clean install of Windows 7. Insert the bootable media and restart the computer. Follow the prompts to complete the installation and input the temporary license key for Windows 7. When the PC starts up, you will be prompted to activate Windows, which will fail.
The final part of the downgrade process involves a call to Microsoft technical support. After your automatic activation fails, you will see a screen with contact information for Microsoft Activation support. Make sure that you have the activation key from Windows 8 available. Explain to the Microsoft representative that you are downgrading and provide them with the Windows 8 license key. You will be provided with a single activation code that will activate Windows 7. Once all of these steps are completed, you will need to re-install any applications and update Windows 7 with the latest set of security patches. Then, finally, you need to transfer over any of the old files from Windows 8 that were backed up earlier. To complete the process, a Windows 7 restore set should be created and placed in a secure location.
Microsoft released SQL Server 2012 on April 1st, 2012. The product is part of a tradition of scalable database solutions, and offers many great improvements over the previous releases.
The most important thing to note about SQL 2012 is its reporting capabilities and how they integrate with SharePoint. Microsoft SQL Server 2012 features PowerPivot, which integrates directly with SharePoint Server and Excel 2010 and 2013 to provide real-time data views and reports. PowerPivot provides an easy way to create and share Business Intelligence across billions of rows of data.
SQL 2012 also offers new high-availability features, such as multi-subnet failover clusters. A multi-subnet failover cluster allows SQL servers in different LAN segments - such as distinct office locations - to host database failover clusters, which provide high-availability and redundancy to databases. This allows companies with distributed computing environments to utilize the server infrastructure of multiple sites.
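One client-side detail worth knowing: connections to a multi-subnet cluster should set the MultiSubnetFailover connection-string keyword (available in .NET 4.5 and later) so the driver tries the listener's IP addresses in parallel instead of timing out on the old subnet. A small sketch with placeholder server and database names:

```powershell
# "SQLLISTENER" and "SalesDB" are placeholders for your listener and database.
$connectionString = "Server=SQLLISTENER;Database=SalesDB;" +
                    "Integrated Security=True;MultiSubnetFailover=True"

$conn = New-Object System.Data.SqlClient.SqlConnection $connectionString
$conn.Open()
Write-Host ("Connected to " + $conn.DataSource)
$conn.Close()
```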
Licensing SQL Server 2012 for hardware has been completely rethought: SQL Server 2012 is licensed per processor core, a new approach in Microsoft's 2012+ product line driven by the density of cores in modern computing hardware. This can make the hardware-licensing model expensive, so companies may be more interested in licensing based on User or Device CALs instead, which makes for simpler planning.
Contact Trigon today if you would like to find out more about SQL Server 2012 and how it can improve your business!
Hard drive encryption is one of those tools that administrators have a love/hate relationship with. In its simplest terms, it is a way to secure data so that it is inaccessible to those who are not authorized to access it. There are different ways to encrypt a hard drive, depending on how secure you want to make the information and how easy or difficult you want to make access to that data.
First is basic hardware encryption. This typically requires a simple password to unlock the drive for use: as soon as the computer boots, a password is requested before the drive can be used. If the incorrect password is entered a certain number of times (typically three), the system needs to be rebooted or power-cycled before you can try again. If the password is not known, the drive is not accessible.
Second, there is software encryption. This is a program either installed on or integrated with the operating system that can use any combination of methods to unlock the encrypted information. These range from a simple password, as in the hardware encryption technique described above, to a security certificate and password combination, to schemes requiring a specific hardware component (such as a specialized flash drive inserted into the system) plus one or more software measures (password, security certificate, and/or biometric reader), all of which are required for access.
So why do administrators have a love/hate relationship with it? The good part about encryption is that if a drive or system is either lost, stolen, or somehow ends up in the wrong hands, it is difficult or impossible to break the encryption to access the information. Obviously, the more secure the measures to encrypt the data, the more difficult it is for any would-be hackers to access the information that they are not authorized to have.
The bad part about encryption is that if the access method is not available to those authorized to have it (a forgotten password, a specialized flash drive left at the office when the laptop is taken home, etc.), the data is not available when it is needed. This is typically only a nuisance: the password can be recovered if a Master Password or some other password-recovery method has been set up, or the specialized flash drive can be retrieved from the office the next business day.
The ugly part comes in when something out of the ordinary occurs. Most of the time, this is something along the lines of a particular person holding the encryption password with no Master Password created, and then that person leaves the company. Or the specialized flash drive gets broken or becomes unreadable by the system. Some of these risks can be mitigated by implementing a recovery measure, like a Master Password or a secondary flash drive with the decryption information stored on it. However, not all risks in regard to hard drive encryption can be avoided, as sometimes information is encrypted and should only be accessed by one person for security reasons.
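With BitLocker, the software encryption built into Windows, that recovery measure is a key protector you add up front and escrow away from the machine. A hedged sketch using the Windows 8 / Server 2012 BitLocker cmdlets, where the mount point, escrow path, and the assumption of a TPM are all specific to your environment:

```powershell
# Add a numerical recovery password protector before enabling encryption,
# so a forgotten PIN or departed employee doesn't become the "ugly" case.
Add-BitLockerKeyProtector -MountPoint "C:" -RecoveryPasswordProtector

# Turn encryption on, unlocking via the machine's TPM (assumes a TPM exists).
Enable-BitLocker -MountPoint "C:" -TpmProtector

# Escrow the recovery password somewhere other than the encrypted machine
# (the share path below is a placeholder).
$rp = (Get-BitLockerVolume -MountPoint "C:").KeyProtector |
      Where-Object KeyProtectorType -eq "RecoveryPassword"
$rp.RecoveryPassword | Out-File "\\server\bitlocker-escrow\PC01.txt"
```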
So should you use hard drive encryption? The answer: it depends. How secure do you need your data to be so that if it does fall into the wrong hands, it won’t be easily accessible? What steps are you able and willing to implement to mitigate the risks imposed if the primary access method is permanently lost? What is the risk of the data being lost/stolen versus the inability to access that data?
If you need help with a disk encryption solution, please contact Trigon today!
Concerned about social media and its impact on your corporate security? If not, you ought to be. The proliferation of social media has made it a fact of life. It's everywhere and everyone's using it. Companies leverage it, users depend on it, and hackers try to exploit it.
So what's the risk, and why all the fuss? Well, for starters, the exposure. The majority of your employees likely check their personal social media pages multiple times a day, and may be spending far more time doing so than you'd feel comfortable with as a business owner or IT manager. Additionally, your Marketing and HR departments are probably doing the same, albeit for useful purposes with the company's best interests in mind. The number of people accessing social media, and how often they do so, only adds to your company's exposure level.
With all this exposure, what are the actual security risks? They run the gamut from individual identity theft to network breaches. They may include:
- Legal Ramifications – Individual postings or activities performed while at work may expose the company to potential liability; such activities may involve sexual harassment or cyberbullying.
- Malware Attacks – Social media websites provide a gateway for malware. Exposure grows significantly with the combined use of standard workstation computers, smartphones, and tablets. Malware, viruses, and spyware are all potential risks.
- Reputation Damage – Derogatory messages or inappropriate photos may jeopardize a company’s reputation. Damage of this nature can cause long-term issues that a company must resolve and move beyond.
- Identity Theft – Besides stealing an individual’s personal identification information, hackers may also target individuals’ business identities. This information may be used to falsely represent oneself as a business representative or to gain access to business property.
- Proprietary Information/Intellectual Property Theft – This may include a user’s reference to a company project or providing detailed information about an upcoming new company product or development strategy. Critical aspects concerning jobs, products, and strategies are all business-owned elements that must be protected.
Now that you have a feel for the exposure and risk levels, what can you do to mitigate the risk to your company? The first option may be to simply block all social media on your network, but that of course means your company can't take advantage of the benefits and opportunities social media offers at the business level. A second option may be to restrict who in your organization has access, but again you're losing the benefits of social media by limiting who may access it. A third, and now more widely accepted, option is access and education. Many companies now realize social media offers too many business advantages not to leverage it fully, including improved communication, enhanced marketing, and increased business awareness.
What are the primary considerations in implementing an employee access and education program? First, develop and implement a Social Media Policy for your employees that includes a clearly defined set of guidelines with examples. At a high level, this policy should cover all business data, employee, and webpage classifications and restrictions; your employees need to understand which elements of company information may be used, for what, and where. Second, start educating your employees on the policy itself and on a “best practices” approach to the use of social media. This combined focus should cover both your defined guidelines and general common-sense use to avoid risks. Provide periodic refresher sessions for reinforcement and for coverage of any new threats or risks. A detailed Social Media Policy coupled with a continuing education plan will help protect your company from the potential perils of social media.
Trigon presents Security Awareness Training on this topic for clients of all sizes in the Central Pennsylvania and Philadelphia region. Please feel free to contact us if you have any questions!