The present invention relates generally to computer systems, and more specifically to storage controllers for use in large-scale storage systems. As computer memory storage and data bandwidth increase in next-generation machines, the variety of workloads accelerated by those machines also increases. Performance portability allows users to upgrade machine components without modifying their applications or data access patterns. Many software applications already exist in the market, however, and there is a need to accelerate their execution on next-generation systems. Such applications may include databases, web servers, file servers, large enterprise applications (e.g., an enterprise resource planning application), and business analysis tools (e.g., a customer relationship management application). Because such large-scale applications have varied performance requirements, accelerating them is more complicated than it was with previous generations of machines.

A typical storage controller presents a hardware abstraction layer to the software user and hides the internal implementation details of the storage from the user. The software user sees the storage as a high-performance block storage device, writing data to the device or requesting data from it by way of one or more logical interfaces. The storage controller translates the logical interface into physical locations on the disk that the user accesses. This hardware abstraction is the foundation of storage virtualization. Some storage controllers also provide on-disk data compression to improve I/O performance.

Each request, such as a read or write, for a large object in a data storage system may result in the creation of several data objects (e.g., disk blocks), some of which may be small. If this creation is performed in hardware, the created objects must be duplicated in hardware.
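The logical-to-physical translation described above can be illustrated with a minimal sketch. The class and the cylinder/head/sector geometry below are illustrative assumptions, not part of the invention; real controllers use far more elaborate mappings.

```python
# Minimal sketch of the logical-to-physical translation a storage
# controller performs behind its block-device abstraction. The geometry
# parameters and names here are illustrative assumptions only.

class StorageController:
    """Maps a logical block address (LBA) to a physical disk location."""

    def __init__(self, blocks_per_track: int, tracks_per_cylinder: int):
        self.blocks_per_track = blocks_per_track
        self.tracks_per_cylinder = tracks_per_cylinder

    def translate(self, lba: int):
        """Return a (cylinder, head, sector) triple for a logical block.

        The user only ever sees the LBA; the physical location is hidden
        by the controller, which is the essence of storage virtualization.
        """
        sector = lba % self.blocks_per_track
        track = lba // self.blocks_per_track
        head = track % self.tracks_per_cylinder
        cylinder = track // self.tracks_per_cylinder
        return (cylinder, head, sector)
```

Because the caller addresses only logical blocks, the controller is free to relocate data physically (or compress it on disk) without the software user noticing.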
For example, if a single read request retrieves data from 4,000 disk blocks, creating those disk blocks may require about 4,000 cycles on a host server, thereby degrading the server's performance. Moreover, in these systems many read or write requests may be small, and only a small subset may generate a large number of small created objects. The overhead of creating small objects for a large number of such requests affects the efficiency and performance of these systems.

U.S. Pat. No. 7,003,654 to Adams et al. describes a system for using memory for direct disk storage. In this system, metadata describing an on-disk cache is mapped into memory. The metadata also describes logical or virtual addresses for memory locations that map to on-disk data structures; as a result, the on-disk data structures are represented in memory.

In U.S. Pat. No. 6,418,538 to Moshovitis et al., a memory controller interface is described for dynamically extending memory storage by utilizing a block of contiguous memory as a cache. The interface is achieved through an indirection table that contains a list of memory locations, each of which holds a pointer to a table or list of cache locations within main memory. Cache entries referenced in this indirection table are used to supply the processor with memory lines from the cache. U.S. Patent Application Publication No. US 2004/0203090 to Bhavnani et al. describes a similar mechanism.

In U.S. Pat. No. 6,405,117 to Moy, a high-performance storage server is described that exploits the memory system of the host computer to allow multiple storage servers to access data in the main memory of a host system without host intervention. The storage server utilizes a port of the host system's memory controller to receive read and write requests from one or more network interface cards coupled to the storage server.
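The indirection-table mechanism described in the prior art above can be sketched as follows. This is a simplified illustration under assumed names, not the mechanism claimed in any of the cited patents: one table entry per memory location, each pointing at the cache lines held for it in main memory.

```python
# Hedged sketch of an indirection-table lookup: each entry maps a
# memory location to the list of cache lines cached for it in main
# memory. All names here are illustrative assumptions.

class IndirectionTable:
    def __init__(self):
        # memory location -> list of cached memory lines
        self._table = {}

    def install(self, address: int, cache_lines: list) -> None:
        """Record which cache lines in main memory back this address."""
        self._table[address] = cache_lines

    def lookup(self, address: int):
        """Return the cached lines for an address, or None on a miss."""
        return self._table.get(address)
```

The processor consults the table first; only on a miss does the request fall through to the slower backing store.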
The memory controller provides these requests to the host computer's system memory for processing. These prior-art approaches allow some of a computer's memory to be used as a cache, thereby improving performance. However, these systems rely on a fixed on-disk-to-main-memory mapping and do not provide dynamic, address-driven mapping of data between the storage controller cache and the underlying data store. These systems may use memory for caching, but those memories are not used for reading from or writing to the data store.

In U.S. Pat. No. 6,389,571 to Castanias et al., a system is described that allocates a variable-length physical page in main memory to a particular logical address. The system uses the logical block address to identify the desired page, and a mapping table maps the virtual page number to the real physical page number in memory. When data in memory is to be evicted, the least recently used page is selected for eviction first, in units of whole pages. If the selected page is part of a cache, the cache must be emptied before the eviction can proceed. This system takes the location of a logical block of data that has been allocated into a particular page and allocates the page into the main memory address space; it does not allow the logical blocks of data in the page to be removed while the page remains part of the main memory address space.

It would be desirable to provide a method and system for managing objects stored on storage media in a data storage system. It would also be desirable to reduce the number of disk blocks created by large object requests. It would further be desirable to provide a method and system for determining whether a particular disk block is frequently accessed by a host computer, so that a determination can be made as to which blocks will be allowed to persist and which will be evicted from the data storage system.
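The whole-page, least-recently-used eviction policy attributed to the prior art above can be sketched briefly. This is a generic LRU illustration under assumed names, not the claimed system: pages are evicted only as whole units, oldest first.

```python
from collections import OrderedDict

# Illustrative sketch of whole-page LRU eviction: the least recently
# used page is evicted first, always in units of whole pages. The
# class and its interface are assumptions for illustration.

class PageCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._pages = OrderedDict()  # page number -> page data, oldest first

    def access(self, page_no: int, data=None):
        """Touch (or insert) a page, evicting the LRU page when full."""
        if page_no in self._pages:
            self._pages.move_to_end(page_no)  # mark most recently used
        else:
            if len(self._pages) >= self.capacity:
                self._pages.popitem(last=False)  # evict LRU whole page
            self._pages[page_no] = data
        return self._pages[page_no]
```

Note the limitation the passage identifies: eviction operates on whole pages, so individual logical blocks cannot be removed while their page remains mapped.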
It would further be desirable to avoid copying the entire data in a cache block, to perform I/O operations with minimal involvement of a CPU, and to use memory more efficiently.

These and other objects may be achieved by a method and system for managing data in a storage device. In one embodiment, the data management method comprises determining whether a new data object is to be created in response to a data storage request, and creating a data object corresponding to the data storage request only if it is determined that the new data object is to be stored in a storage device. In one embodiment, the data management system comprises a storage controller coupled to a storage device for creating a data object corresponding to the data storage request in response to determining that the new data object is to be created, and a disk controller coupled to the storage controller and configured to create a file system for storing the new data object. The file system includes a file entry and additional file entries referencing the new data object. In one embodiment, the new data object is created only if the file system is not already present in the disk controller. In one embodiment, a method for