
Archive

Showing posts with the label HARDWARE.

tpm hardware

Tuesday, January 4, 2011

TPM hardware attacks

Filed under: Crypto,Hacking,Hardware,Security — Nate Lawson @ 5:00 am

Trusted Computing has been a controversial addition to PCs since it was first announced as Palladium in 2002. Recently, a group at Dartmouth implemented an attack first described by Bernhard Kauer earlier this year. The attack is very simple, using only a 3-inch piece of wire. As with the Sharpie DRM hack, people are wondering how a system designed by a major industry group over such a long period could be so easily bypassed.

The PC implementation of version 1.1 of the Trusted Computing architecture works as follows. The boot ROM and then the BIOS are the first software to run on the CPU. The BIOS stores a hash of the boot loader in the TPM’s PCR before executing it. A TPM-aware boot loader hashes the kernel, extends the PCR with that value, and executes the kernel. This continues on down the chain until the kernel is hashing individual applications.
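To make the measurement chain concrete, here is a minimal Python sketch of the extend operation (new PCR value = SHA-1 of the old value concatenated with the new measurement). The component names are purely illustrative, not the actual measured images.

    import hashlib

    def sha1(data: bytes) -> bytes:
        return hashlib.sha1(data).digest()

    def extend(pcr: bytes, measurement: bytes) -> bytes:
        # TPM 1.1/1.2 extend: new PCR value = SHA-1(old PCR value || measurement)
        return sha1(pcr + measurement)

    # PCRs start out as 20 zero bytes after a platform reset.
    pcr = bytes(20)

    # Each stage measures (hashes) the next component before handing control to it.
    for component in (b"boot loader image", b"kernel image", b"application binary"):
        pcr = extend(pcr, sha1(component))

    print(pcr.hex())  # changing any component anywhere in the chain changes this value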

How does software know it can trust this data? In addition to reading the SHA-1 hash from the PCR, it can ask the TPM to sign the response plus a challenge value using an RSA private key. This allows the software to be certain it’s talking to the actual TPM and no man-in-the-middle is lying about the PCR values. If it doesn’t verify this signature, it’s vulnerable to this MITM attack.
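A rough sketch of that check in Python, using the pyca/cryptography package and assuming the verifier already trusts the TPM’s RSA public key. A real TPM 1.2 quote actually signs a structured blob over a composite of the selected PCRs plus the nonce, so treat this as the shape of the verification rather than the exact wire format.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def verify_quote(tpm_public_key, pcr_value: bytes, nonce: bytes, signature: bytes) -> bool:
        # A valid signature over (PCR contents || challenge nonce) shows the reply
        # came from the TPM itself and was not spoofed by a man-in-the-middle.
        try:
            tpm_public_key.verify(signature, pcr_value + nonce,
                                  padding.PKCS1v15(), hashes.SHA1())
            return True
        except InvalidSignature:
            return False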

As an aside, the boot loader attack announced by Kumar et al isn’t really an attack on the TPM. They apparently patched the boot loader (a la eEye’s BootRoot) and then leveraged that vantage point to patch the Vista kernel. They got around Vista’s signature check routines by patching them to lie and always say “everything’s ok.” This is the realm of standard software protection and is not relevant to discussion about the TPM.

How does the software know that another component didn’t just overwrite the PCRs with spoofed but valid hashes? PCRs are “extend-only,” meaning they only add new values to the hash chain; they don’t allow overwriting old values. So why couldn’t an attacker just reset the TPM and start over? It’s possible a software attack could cause such a reset if a particular TPM was buggy, but it’s easier to attack the hardware.

The TPM is attached to a very simple bus known as LPC (Low Pin Count). This is the same bus used for Xbox1 modchips. This bus has a 4-bit address/data bus, 33 MHz clock, frame, and reset lines. It’s designed to host low-speed peripherals like serial/parallel ports and keyboard/mouse.

The Dartmouth researchers simply grounded the LPC reset line with a short wire while the system was running. From the video, you can see that the fan control and other components on the bus were also reset along with the TPM but the system keeps running. At this point, the PCRs are clear, just like at boot. Now any software component could store known-good hashes in the TPM, subverting any auditing.

This particular attack was known before the 1.1 spec was released and was addressed in version 1.2 of the specifications. Why did it go unpatched for so long? Because it required non-trivial changes in the chipset and CPU that still aren’t fully deployed.

Next time, we’ll discuss a simple hardware attack that works against version 1.2 TPMs.




hardware failover

This document provides a step-by-step example for configuring this feature for SonicOS Enhanced 2.5.0.5 through 3.0. SonicOS Enhanced 2.5 included significant changes to the method for Hardware Failover (HF). Please review the applicable SonicOS Enhanced Administration Guide for a full explanation of functionality and requirements.

SonicWALL Hardware Failover provides firewall redundancy. When the primary unit loses functionality or connectivity, the backup unit assumes the active role. If preempt is enabled, the active role fails back to the primary unit once it recovers. The Primary and Backup SonicWALL devices are currently only capable of active/passive Hardware Failover; active/active failover is not supported at present. Session state is not synchronized between the Primary and Backup SonicWALL security appliances, so any session that was active at the time of a failover needs to be renegotiated.

Hardware Failover can be configured with only 1 public WAN IP address (Virtual IP only) or with 3 IP addresses (Virtual IP, Primary management IP, and Backup management IP). Using 3 WAN addresses allows management access to either the Primary or Backup unit whether it is the active unit or not, which can assist in some remote troubleshooting scenarios. If only 1 public IP is defined, the management interface of the unit running in idle mode will not be accessible via the WAN interface. The scenario in this document uses 3 WAN IP addresses (Virtual IP and Management IPs), but notes for using 1 WAN IP are included.

Scenario: PRO2040 HF Pair running SonicOS 2.5.0.5e or 3.0.0.8e. The Primary and Backup SonicWALLs are connected with a crossover cable to a designated interface (X3 for the 2040; X5 for the 3060, 4060, and 5060) to create the Hardware Failover Link. All synchronization information and the HF heartbeat are passed through this Hardware Failover Link.
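For illustration only, here is a minimal Python sketch of the general active/passive pattern described above (heartbeat monitoring with optional preempt). The interval, threshold, and callback names are hypothetical; this is not SonicOS code.

    import time

    HEARTBEAT_INTERVAL = 1.0   # seconds between heartbeat checks (illustrative)
    FAILOVER_THRESHOLD = 5     # missed heartbeats before the backup takes over

    def backup_unit_loop(heartbeat_received, primary_recovered, preempt_enabled=True):
        # heartbeat_received() and primary_recovered() are hypothetical callbacks
        # standing in for traffic observed on the Hardware Failover Link.
        missed = 0
        active = False
        while True:
            if heartbeat_received():
                missed = 0
                # With preempt enabled, hand the active role back to the primary
                # once it is healthy again.
                if active and preempt_enabled and primary_recovered():
                    active = False
            else:
                missed += 1
                if not active and missed >= FAILOVER_THRESHOLD:
                    # Assume the active role; existing sessions must be renegotiated
                    # because session state is not synchronized between units.
                    active = True
            time.sleep(HEARTBEAT_INTERVAL)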



web server hardware requirements

Designing and building a web server is something that needs the utmost care and attention, simply because an internet-based business needs a server that’s reliable and able to run 24/7 for months without requiring any servicing. That reliability has to be factored into the design from the start. Picking reliable components is just as important as making sure you pick components that fit the purpose your web server is set to fulfill.

Only with a thorough analysis of what content you’ll be serving to your clients, and on what scale, will you be able to properly define where bottlenecks might arise and pick the right components for the web server without either falling short or building a system that’s overpowered. We need to make sure these bottlenecks are well understood, in both scale and frequency of occurrence, and that proper measures are in place to limit their effect on the performance of the server and, more importantly, the experience of the client.

And that’s the primary objective here: we need to make sure the website feels just as responsive to the client whether one or one hundred people are simultaneously accessing the same content. All that counts is that the website keeps running regardless of how many clients are being served. To accomplish that we’ll need to dig deeper than just going online, buying a couple of web servers, installing the operating system, uploading the content and starting up the website. In the next few pages we’ll walk you through our design process for the new Hardware Analysis web server, a server designed to serve daily changing content with lots of images, movies, active forums and millions of page views every month.
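As a rough illustration of the kind of scale analysis mentioned above, the back-of-envelope sizing below uses hypothetical traffic figures; they are assumptions for the sketch, not measurements from the Hardware Analysis site.

    # Back-of-envelope sizing with hypothetical traffic figures.
    page_views_per_month = 3_000_000   # "millions of page views every month"
    avg_page_weight_kb = 600           # HTML + images + scripts per view (assumed)
    peak_to_average_ratio = 10         # traffic is bursty (assumed)

    seconds_per_month = 30 * 24 * 3600
    avg_pages_per_sec = page_views_per_month / seconds_per_month
    peak_pages_per_sec = avg_pages_per_sec * peak_to_average_ratio
    peak_bandwidth_mbit = peak_pages_per_sec * avg_page_weight_kb * 8 / 1000

    print(f"average: {avg_pages_per_sec:.1f} pages/s, peak: {peak_pages_per_sec:.1f} pages/s")
    print(f"peak bandwidth: roughly {peak_bandwidth_mbit:.0f} Mbit/s")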



virtual machine hardware

In computing, hardware-assisted virtualization is a platform virtualization approach that enables efficient full virtualization using help from hardware capabilities, primarily from the host processors. Full virtualization is used to simulate a complete hardware environment, or virtual machine, in which an unmodified guest operating system (using the same instruction set as the host machine) executes in complete isolation. Hardware-assisted virtualization was added to x86 processors (Intel VT-x or AMD-V) in 2006.

Hardware-assisted virtualization is also known as accelerated virtualization; Xen calls it hardware virtual machine (HVM), and Virtual Iron calls it native virtualization.


History

Hardware-assisted virtualization was first introduced on the IBM System/370 in 1972, for use with VM/370, the first virtual machine operating system. Virtualization was eclipsed in the late 1970s, with the advent of minicomputers that allowed for efficient timesharing, and later with the commoditization of microcomputers.

The proliferation of x86 servers rekindled interest in virtualization. The primary driver was the potential for server consolidation: virtualization allowed a single server to replace multiple underutilized dedicated servers.

However, the x86 architecture did not meet the Popek and Goldberg virtualization requirements to achieve “classical virtualization”:

  • equivalence: a program running under the virtual machine monitor (VMM) should exhibit a behavior essentially identical to that demonstrated when running on an equivalent machine directly;
  • resource control (also called safety): the VMM must be in complete control of the virtualized resources;
  • efficiency: a statistically dominant fraction of machine instructions must be executed without VMM intervention.

This made it difficult to implement a virtual machine monitor for this type of processor. Specific limitations included the inability to trap on some privileged instructions.

To compensate for these architectural limitations, virtualization of the x86 architecture has been accomplished through two methods: full virtualization or paravirtualization.[1] Both create the illusion of physical hardware to achieve the goal of operating system independence from the hardware but present some trade-offs in performance and complexity.

Paravirtualization has primarily been used in university research projects such as Denali and Xen. These projects employ the technique to run modified versions of operating systems for which source code is readily available (such as Linux and FreeBSD). A paravirtualized virtual machine provides a special API requiring substantial OS modifications. The best known commercial implementations of paravirtualization are modified Linux kernels from XenSource and GNU/Linux distributors.

Full virtualization was implemented in first-generation x86 VMMs. It relies on binary translation to trap and virtualize the execution of certain sensitive, non-virtualizable instructions. With this approach, critical instructions are discovered (statically or dynamically at run-time) and replaced with traps into the VMM to be emulated in software. Binary translation can incur a large performance overhead in comparison to a virtual machine running on natively virtualized architectures such as the IBM System/370. VirtualBox and VMware Workstation (for 32-bit guests only), as well as Microsoft Virtual PC, are well-known commercial implementations of full virtualization.

With hardware-assisted virtualization, the VMM can efficiently virtualize the entire x86 instruction set by handling these sensitive instructions using a classic trap-and-emulate model in hardware, as opposed to software.

Intel and AMD developed distinct implementations of hardware-assisted x86 virtualization: Intel VT-x and AMD-V, respectively. On the Itanium architecture, hardware-assisted virtualization is known as VT-i.
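On a Linux host, one quick way to see whether a processor exposes these extensions is to look for the vmx (Intel VT-x) or svm (AMD-V) CPU flags. A small sketch, assuming /proc/cpuinfo is available:

    def detect_hw_virtualization(cpuinfo_path="/proc/cpuinfo"):
        # Intel VT-x advertises the 'vmx' CPU flag; AMD-V advertises 'svm'.
        flags = set()
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
        if "vmx" in flags:
            return "Intel VT-x"
        if "svm" in flags:
            return "AMD-V"
        return None

    print(detect_hw_virtualization() or "no hardware virtualization flags found")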

Well-known implementations of hardware-assisted x86 virtualization include VMware Workstation (for 64-bit guests only), Xen 3.x (including derivatives like Virtual Iron), Linux KVM and Microsoft Hyper-V.




telemarketing hardware

Questions About Marketing CharTec For Your Computer Business?

Friday, March 5th, 2010

Recently several of our computer consultant clients have questioned the benefit of HAAS with CharTec after adding up the cost of the financing. Below are my thoughts about why HAAS is critical to your marketing success as a computer business owner.

How much will the average person pay for a car or home after the financing is added to the cost?
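As a rough worked example (the price, rate, and term below are hypothetical, not CharTec’s or anyone’s actual terms), standard loan amortization shows how much financing adds to the sticker price:

    def financed_cost(principal, annual_rate, years):
        # Standard amortization: monthly payment = P * r / (1 - (1 + r) ** -n)
        r = annual_rate / 12
        n = years * 12
        monthly = principal * r / (1 - (1 + r) ** -n)
        return monthly, monthly * n

    # Hypothetical example: a $250,000 home financed over 30 years at 6% APR.
    payment, total = financed_cost(250_000, 0.06, 30)
    print(f"monthly payment: ${payment:,.2f}, total paid: ${total:,.2f}")

With those assumptions the buyer pays well over twice the sticker price, yet people finance homes and cars every day.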

If everyone cringed at the financing cost, then our society’s spending would shrink and so would the economy. Consider Mexico, where pretty much everyone pays for homes and cars with cash, and really only rich people buy anything.

Perhaps there is an argument that that would be a better arrangement, but it would also mean fewer millionaires, because there would be less cash blowing around for entrepreneurs to grab.




dedicated server hardware

Hardware requirements for servers vary, depending on the server application. Absolute CPU speed is not usually as critical to a server as it is to a desktop machine. A server’s duty to provide service to many users over a network leads to different requirements, such as fast network connections and high I/O throughput. Since servers are usually accessed over a network, they may run in headless mode without a monitor or input device. Processes that are not needed for the server’s function are not run. Many servers do not have a graphical user interface (GUI), as it is unnecessary and consumes resources that could be allocated elsewhere. Similarly, audio and USB interfaces may be omitted.

Servers often run for long periods without interruption, and availability must often be very high, making hardware reliability and durability extremely important. Although servers can be built from commodity computer parts, mission-critical enterprise servers are ideally very fault tolerant and use specialized hardware with low failure rates in order to maximize uptime, for even a short-term failure can cost more than purchasing and installing the system. For example, it may take only a few minutes of downtime at a national stock exchange to justify the expense of entirely replacing the system with something more reliable. Servers may incorporate faster, higher-capacity hard drives, larger computer fans or water cooling to help remove heat, and uninterruptible power supplies that ensure the servers continue to function in the event of a power failure. These components offer higher performance and reliability at a correspondingly higher price. Hardware redundancy (installing more than one instance of modules such as power supplies and hard disks, arranged so that if one fails another is automatically available) is widely used. ECC memory devices that detect and correct errors are used; non-ECC memory is more likely to cause data corruption.
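To put the downtime argument in numbers, the short sketch below converts an availability target into allowed downtime per year; the targets shown are common industry figures rather than values from the text.

    MINUTES_PER_YEAR = 365 * 24 * 60

    def downtime_minutes_per_year(availability_pct):
        return MINUTES_PER_YEAR * (1 - availability_pct / 100)

    for target in (99.0, 99.9, 99.99, 99.999):
        print(f"{target}% availability allows about "
              f"{downtime_minutes_per_year(target):.1f} minutes of downtime per year")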

To increase reliability, most servers use memory with error detection and correction, redundant disks, redundant power supplies and so on. Such components are also frequently hot swappable, allowing them to be replaced on a running server without shutting it down. To prevent overheating, servers often have more powerful fans. As servers are usually administered by qualified engineers, their operating systems are also tuned more for stability and performance than for user friendliness and ease of use, with Linux taking a noticeably larger share than it does on desktop computers.

Because servers need a stable power supply, good Internet access, and increased security, and are also noisy, it is usual to house them in dedicated server centers or special rooms. This makes it important to reduce power consumption, as the extra energy used generates more heat and the temperature in the room could exceed acceptable limits. Normally server rooms are equipped with air conditioning. Server casings are usually flat and wide, adapted to storing many devices next to each other in a server rack. Unlike ordinary computers, servers can usually be configured, powered up and down, or rebooted remotely, using out-of-band management.

Many servers take a long time for the hardware to start up and load the operating system. Servers often do extensive pre-boot memory testing and verification and startup of remote management services. The hard drive controllers then start up banks of drives sequentially, rather than all at once, so as not to overload the power supply with startup surges, and afterwards they initiate RAID system pre-checks for correct operation of redundancy. It is common for a machine to take several minutes to start up, but it may not need restarting for months or years.



