NetApp, Inc., formerly Network Appliance, Inc., is an American computer storage and data management company headquartered in Sunnyvale, California. It is a member of the NASDAQ-100 and was ranked on the Fortune 500 for the first time in 2012.



Patent
NetApp | Date: 2016-06-03

Methods and apparatuses for updating members of a data storage reliability group are provided. In one exemplary method, a reliability group includes a data zone in a first storage node and a checksum zone in a second data storage node. The method includes updating a version counter associated with the data zone in response to destaging a data object from a staging area of the data zone to a store area of the data zone without synchronizing the destaging with the state of the checksum zone. The method further includes transmitting, from the data zone to the checksum zone, an update message indicating completion of the destaging of the data object, wherein the update message includes a current value of the version counter.
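As a rough illustration only (the class and message names below are hypothetical, not NetApp's), the destage-then-notify flow described in this abstract might look like:

```python
# Hypothetical sketch of the destage-then-notify flow described above.
from dataclasses import dataclass, field

@dataclass
class ChecksumZone:
    last_seen_version: int = 0

    def on_update(self, message: dict) -> None:
        # Apply the update; the version counter tells the checksum zone how far
        # the data zone has progressed, without synchronous coordination.
        self.last_seen_version = max(self.last_seen_version, message["version"])

@dataclass
class DataZone:
    checksum_zone: ChecksumZone
    version: int = 0
    staging: list = field(default_factory=list)
    store: list = field(default_factory=list)

    def destage(self) -> None:
        # Move an object from the staging area to the store area and bump the
        # version counter; no synchronization with the checksum zone's state.
        obj = self.staging.pop(0)
        self.store.append(obj)
        self.version += 1
        # Tell the checksum zone the destage completed, carrying the current
        # value of the version counter.
        self.checksum_zone.on_update({"object": obj, "version": self.version})

cz = ChecksumZone()
dz = DataZone(checksum_zone=cz, staging=["obj-1"])
dz.destage()
assert cz.last_seen_version == 1
```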


Patent
NetApp | Date: 2016-03-01

Methods and systems for a storage environment are provided. A policy for an input/output (I/O) stream having a plurality of I/O requests for accessing storage at a storage device of the storage sub-system is translated into flow attributes so that the I/O stream can be assigned to one of a plurality of queues maintained for placing I/O requests based on varying priorities defined by set policies. When an I/O request for the associated policy is received by the storage sub-system, the storage sub-system determines a flow attribute associated with the I/O request and the policy; selects a queue for staging the I/O request, such that the selected queue is either of higher priority than indicated by the flow attribute or at least of the same priority as indicated by the flow attribute; and allocates storage sub-system resources for processing the received I/O request.
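A minimal sketch, assuming a toy policy format and three fixed priority levels (all names are illustrative, not the patented implementation), of translating a policy into a flow attribute and staging an I/O request on a queue of at least that priority:

```python
# Hypothetical sketch: translate a policy into a flow attribute and stage an
# I/O request on a queue of equal or higher priority.
from collections import deque

PRIORITIES = ["low", "medium", "high"]          # ascending priority
queues = {p: deque() for p in PRIORITIES}

def flow_attribute_for(policy: dict) -> str:
    # Toy translation: a latency-sensitive policy maps to a high-priority flow.
    return "high" if policy.get("latency_sensitive") else "medium"

def select_queue(flow_attr: str) -> str:
    # Pick a queue whose priority is at least that of the flow attribute
    # (here simply the matching queue; a scheduler could promote it).
    return PRIORITIES[PRIORITIES.index(flow_attr)]

def submit(io_request: dict, policy: dict) -> None:
    attr = flow_attribute_for(policy)
    queues[select_queue(attr)].append(io_request)

submit({"lun": 0, "offset": 4096, "len": 512}, {"latency_sensitive": True})
print(len(queues["high"]))   # -> 1
```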


Grant
Agency: European Commission | Branch: H2020 | Program: RIA | Phase: ICT-07-2014 | Award Amount: 6.94M | Year: 2015

The SSICLOPS project will focus on techniques for the management of federated private cloud infrastructures, in particular cloud networking techniques (within software-defined data centres and across wide-area networks). Key deliverables from the project will include a metadata description language for workloads, resources and policies, a flexible scheduling system using that metadata, workload-specific adaptations to TCP/IP stacks, and data centre performance analysis tools. Addressing topics such as dynamic configuration, automated provisioning and orchestration of cloud resources, the SSICLOPS project will investigate high-performance, vertically integrated network stacks for intra/inter-cloud communication and efficient, scalable, and secure intra/inter-DC and client-facing transport mechanisms. The project will design, implement, demonstrate, and evaluate three specific use cases, namely a cloud-based in-memory database, the analysis of physics experiment data, and the prototypical extension of network stacks for a telecom provider in the SSICLOPS testbed.


A system and method of cache monitoring in storage systems includes storing storage blocks in a cache memory. Each of the storage blocks is associated with status indicators. As requests are received at the cache memory, the requests are processed and the status indicators associated with the storage blocks are updated in response to the processing of the requests. One or more storage blocks are selected for eviction when a storage block limit is reached. As the selected storage blocks are evicted from the cache memory, block counters are updated based on the status indicators associated with the evicted storage blocks. Each of the block counters is associated with a corresponding combination of the status indicators. Caching statistics are periodically updated based on the block counters.
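A minimal sketch of the counting scheme, assuming an LRU-style cache and a small set of made-up status indicators ("accessed", "dirty"); the eviction policy and indicator names are illustrative only:

```python
# Hypothetical sketch: per-block status indicators feed block counters on
# eviction, and caching statistics are derived from those counters.
from collections import Counter, OrderedDict

CACHE_LIMIT = 2
cache = OrderedDict()          # block id -> set of status indicators
block_counters = Counter()     # keyed by the combination of indicators

def access(block_id: int, wrote: bool = False) -> None:
    indicators = cache.setdefault(block_id, set())
    indicators.add("accessed")
    if wrote:
        indicators.add("dirty")
    cache.move_to_end(block_id)
    while len(cache) > CACHE_LIMIT:
        _, evicted = cache.popitem(last=False)        # evict least recently used
        block_counters[frozenset(evicted)] += 1       # count by indicator combination

def caching_statistics() -> dict:
    # Periodic roll-up of the block counters into statistics.
    return {"evicted_dirty": sum(c for k, c in block_counters.items() if "dirty" in k)}

access(1); access(2, wrote=True); access(3)
print(caching_statistics())   # -> {'evicted_dirty': 0}  (the evicted block 1 was clean)
```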


Technology is disclosed for improving performance during playback of logged data storage operations. The technology can monitor a log to which data storage operations are written before data is committed to a volume; determine counts of various types of data storage operations; and when the counts exceed a specified threshold, cause the data storage operations to be committed to the volume. Some data storage operations can be coalesced during playback to further improve performance.
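As a hedged illustration (the threshold value, operation format, and coalescing rule are assumptions, not the disclosed technology), counting logged operations and coalescing overlapping writes at playback could be sketched as:

```python
# Hypothetical sketch: count logged operations by type and force a commit to
# the volume once any count crosses a threshold; overlapping writes to the
# same block are coalesced during playback.
from collections import Counter

THRESHOLD = 3
log, counts = [], Counter()

def log_op(op: dict, commit) -> None:
    log.append(op)
    counts[op["type"]] += 1
    if counts[op["type"]] >= THRESHOLD:
        playback(commit)

def playback(commit) -> None:
    # Coalesce: keep only the latest write per block before committing.
    latest = {}
    for op in log:
        if op["type"] == "write":
            latest[op["block"]] = op
    for op in latest.values():
        commit(op)
    log.clear()
    counts.clear()

committed = []
for i in range(3):
    log_op({"type": "write", "block": 7, "data": i}, committed.append)
print(committed)   # -> only the final write to block 7 is committed
```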


Embodiments herein are directed to efficient crash recovery of persistent metadata managed by a volume layer of a storage input/output (I/O) stack executing on one or more nodes of a cluster. Volume metadata managed by the volume layer is organized as a multi-level dense tree, wherein each level of the dense tree includes volume metadata entries for storing the volume metadata. When a level of the dense tree is full, the volume metadata entries of the level are merged with the next lower level of the dense tree. During a merge operation, two sets of generation IDs may be used in accordance with a double buffer arrangement: a first generation ID for the append buffer that is full (i.e., a merge staging buffer) and a second, incremented generation ID for the append buffer that accepts new volume metadata entries. Upon completion of the merge operation, the lower level (e.g., level 1) to which the merge is directed is assigned the generation ID of the merge staging buffer.
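A minimal sketch of the double-buffer arrangement, assuming a hypothetical level-0 append buffer of fixed capacity and a dictionary standing in for level 1 (all names are illustrative):

```python
# Hypothetical sketch: double-buffered append buffers with generation IDs.
# The full buffer becomes the merge staging buffer; new entries go to a fresh
# buffer with an incremented generation ID, and level 1 inherits the staging
# buffer's generation ID when the merge completes.
class DenseTreeLevel0:
    def __init__(self):
        self.generation_id = 1
        self.append_buffer = []     # active buffer for new volume metadata entries
        self.staging_buffer = None  # buffer currently being merged
        self.staging_gen = None
        self.level1 = {"entries": [], "generation_id": 0}

    def insert(self, entry, capacity=4):
        self.append_buffer.append(entry)
        if len(self.append_buffer) >= capacity:
            self.start_merge()

    def start_merge(self):
        # The first generation ID stays with the full (merge staging) buffer ...
        self.staging_buffer, self.staging_gen = self.append_buffer, self.generation_id
        # ... while new entries go to a fresh buffer with an incremented ID.
        self.append_buffer, self.generation_id = [], self.generation_id + 1

    def complete_merge(self):
        self.level1["entries"].extend(self.staging_buffer)
        # Level 1 is assigned the generation ID of the merge staging buffer.
        self.level1["generation_id"] = self.staging_gen
        self.staging_buffer = self.staging_gen = None

tree = DenseTreeLevel0()
for off in range(4):
    tree.insert(("offset", off))
tree.complete_merge()
print(tree.level1["generation_id"], tree.generation_id)   # -> 1 2
```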


A flash-optimized, log-structured layer of a file system of a storage input/output (I/O) stack executes on one or more nodes of a cluster. The log-structured layer of the file system provides sequential storage of data and metadata (i.e., a log-structured layout) on solid state drives (SSDs) of storage arrays in the cluster to reduce write amplification, while leveraging variable compression and variable length data features of the storage I/O stack. The data may be organized as an arbitrary number of variable-length extents of one or more host-visible logical units (LUNs) served by the nodes. The metadata may include mappings from host-visible logical block address ranges (i.e., offset ranges) of a LUN to extent keys, as well as mappings of the extent keys to SSD storage locations of the extents. The storage location of an extent on SSD is effectively virtualized by its mapped extent key (i.e., extent store layer mappings) such that relocation of the extent on SSD does not require an update to volume layer metadata (i.e., the extent key sufficiently identifies the extent).
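A minimal sketch of the two mapping layers, assuming a toy content-derived key and in-memory dictionaries standing in for the volume layer and extent store layer; it shows why relocating an extent touches only the extent store mapping:

```python
# Hypothetical sketch of the two mapping layers: LUN offset ranges map to
# extent keys (volume layer), and extent keys map to SSD locations (extent
# store layer), so an extent can move on SSD without touching volume metadata.
volume_layer = {}        # (lun, offset, length) -> extent key
extent_store = {}        # extent key -> SSD location

def write_extent(lun, offset, data, location):
    key = hash((lun, offset, data))          # stand-in for a content-derived key
    volume_layer[(lun, offset, len(data))] = key
    extent_store[key] = location
    return key

def relocate_extent(key, new_location):
    # Relocation updates only the extent store mapping; the volume layer's
    # extent key still identifies the extent, so it needs no update.
    extent_store[key] = new_location

def read_extent(lun, offset, length):
    key = volume_layer[(lun, offset, length)]
    return extent_store[key]

k = write_extent(lun=0, offset=0, data=b"abc", location="ssd0:block17")
relocate_extent(k, "ssd1:block3")
print(read_extent(0, 0, 3))   # -> ssd1:block3
```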


Patent
NetApp | Date: 2016-08-22

One or more techniques and/or systems are disclosed for redeploying a baseline VM (BVM) to one or more child VMs (CVMs) by cloning only the virtual drives of the BVM, instead of the entire parent BVM. A temporary directory is created in a datastore that holds the target CVMs whose virtual drives are to be replaced (i.e., that are to be re-baselined). One or more replacement virtual drives (RVDs) are created in the temporary directory, where each RVD comprises a clone of a virtual drive of the source BVM. The RVDs are then moved from the temporary directory to the directories of the target CVMs, replacing their existing virtual drives, so that the target CVMs are re-baselined to the state of the parent BVM.
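A rough sketch of the re-baselining flow, assuming local file paths stand in for datastore objects and a plain file copy stands in for virtual drive cloning (the paths and the disk.vmdk name are hypothetical):

```python
# Hypothetical sketch: clone only the BVM's virtual drive into a temporary
# directory, then move each clone over a target CVM's existing drive.
import shutil
import tempfile
from pathlib import Path

def rebaseline(bvm_drive: Path, cvm_dirs: list[Path]) -> None:
    # Temporary directory created alongside the BVM drive in the datastore.
    with tempfile.TemporaryDirectory(dir=bvm_drive.parent) as tmp:
        for cvm_dir in cvm_dirs:
            # Create the replacement virtual drive (RVD) as a clone of the BVM drive.
            rvd = Path(tmp) / f"rvd-{cvm_dir.name}.vmdk"
            shutil.copy2(bvm_drive, rvd)
            # Move the RVD into the CVM directory, replacing its existing drive,
            # so the CVM is re-baselined to the BVM's state.
            shutil.move(str(rvd), str(cvm_dir / "disk.vmdk"))
```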


Techniques to account for storage consumption and capacity allocation across heterogeneous storage objects are disclosed. A capacity accountability system can ascertain a set of heterogeneous storage objects provisioned for a storage consumer, where the heterogeneous storage objects are categorized by storage object hierarchy levels. The capacity accountability system can then identify an association between the storage consumer and a storage object hierarchy level, and account for storage object consumption and storage capacity allocation of the storage consumer by normalizing storage consumption data and capacity allocation data at that storage object hierarchy level across the heterogeneous storage objects.
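A minimal sketch, assuming made-up consumers, hierarchy levels, and figures, of accounting at a single hierarchy level per consumer so that nested objects are not double counted:

```python
# Hypothetical sketch: account for consumption and allocation at the hierarchy
# level associated with each consumer. Names, levels, and numbers are illustrative.
objects = [
    {"consumer": "tenant-a", "level": "volume", "consumed": 120, "allocated": 200},
    {"consumer": "tenant-a", "level": "lun",    "consumed":  30, "allocated":  50},
    {"consumer": "tenant-b", "level": "volume", "consumed":  80, "allocated": 100},
]

consumer_level = {"tenant-a": "volume", "tenant-b": "volume"}   # consumer-to-level association

def account(consumer: str) -> dict:
    level = consumer_level[consumer]
    # Normalize: only objects at the consumer's associated level are summed,
    # so a LUN nested inside an already-counted volume is not counted twice.
    consumed = sum(o["consumed"] for o in objects
                   if o["consumer"] == consumer and o["level"] == level)
    allocated = sum(o["allocated"] for o in objects
                    if o["consumer"] == consumer and o["level"] == level)
    return {"consumer": consumer, "level": level,
            "consumed": consumed, "allocated": allocated}

print(account("tenant-a"))   # -> 120 consumed / 200 allocated at the volume level
```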


Fault isolation capabilities made available by user space can be provided for an embedded network storage system without sacrificing efficiency. By giving user space processes direct access to specific devices (e.g., network interface cards and storage adapters), processes in user space can initiate input/output requests without issuing system calls (and entering kernel mode). Multiple user space processes can initiate requests serviced by a user space device driver by sharing a read-only address space that maps the entire physical memory one-to-one. In addition, a user space process can initiate communication with another user space process through transmit and receive queues similar to those used by hardware devices. Finally, a mechanism ensures that virtual addresses that are valid in one address space reference the same physical page in another address space.
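A loose analogy in Python (multiprocessing queues stand in for the shared-memory transmit and receive queues; this is not the disclosed mechanism) of two user space processes exchanging I/O requests through paired queues:

```python
# Hypothetical sketch: two user space processes exchange I/O requests through
# paired transmit/receive queues, in the spirit of the hardware-style queues
# described above. multiprocessing.Queue is only a stand-in for shared memory.
from multiprocessing import Process, Queue

def driver(tx: Queue, rx: Queue) -> None:
    # User space device driver: service a request from the transmit queue and
    # post a completion to the receive queue.
    req = tx.get()
    rx.put({"id": req["id"], "status": "done"})

if __name__ == "__main__":
    tx, rx = Queue(), Queue()
    p = Process(target=driver, args=(tx, rx))
    p.start()
    tx.put({"id": 1, "op": "read", "block": 42})   # initiate an I/O request
    print(rx.get())                                # -> {'id': 1, 'status': 'done'}
    p.join()
```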
