NetApp, Inc., formerly Network Appliance, Inc., is an American computer storage and data management company headquartered in Sunnyvale, California. It is a member of the NASDAQ-100. It was ranked on the Fortune 500 for the first time in 2012.

Methods and systems for tracking information that is transferred from a source to a destination storage system are provided. The source storage system maintains a first data structure for indicating that a storage block has been transferred. The destination storage system receives the storage block and updates a second data structure to indicate that the storage block has been received. The first data structure and the second data structure are compared to determine that the storage block was successfully transferred from the source storage system and received by the destination storage system.
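
A minimal sketch of this two-structure scheme, assuming each "data structure" is a per-block bitmap keyed by block number; the class and method names here are hypothetical, not NetApp's:

```python
class TransferTracker:
    """Toy model: one bitmap on the source, one on the destination."""

    def __init__(self, num_blocks: int):
        self.sent = [False] * num_blocks       # first data structure (source)
        self.received = [False] * num_blocks   # second data structure (destination)

    def mark_sent(self, block: int) -> None:
        self.sent[block] = True

    def mark_received(self, block: int) -> None:
        self.received[block] = True

    def unconfirmed(self) -> list[int]:
        # Comparing the two structures: blocks marked sent but never
        # received indicate an incomplete or failed transfer.
        return [b for b in range(len(self.sent))
                if self.sent[b] and not self.received[b]]


tracker = TransferTracker(num_blocks=8)
for b in (0, 1, 2, 5):
    tracker.mark_sent(b)
for b in (0, 1, 5):
    tracker.mark_received(b)
print("unconfirmed blocks:", tracker.unconfirmed())  # -> [2]
```

In a real deployment the two bitmaps would live on separate systems and be exchanged for comparison; a single process stands in for both sides here.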

NetApp | Date: 2016-12-04

One or more techniques and/or systems are provided for cluster configuration information replication, managing cluster-wide service agents, and/or cluster-wide outage detection. In an example of cluster configuration information replication, a replication workflow corresponding to a storage operation implemented for a storage object (e.g., renaming of a volume) of a first storage cluster may be transferred to a second storage cluster for selective implementation. In an example of managing cluster-wide service agents, cluster-wide service agents are deployed to nodes of a cluster storage environment, where a master agent actively processes cluster service calls and standby agents passively wait for reassignment as a failover master in the event the master agent fails. In an example of cluster-wide outage detection, a cluster-wide outage may be determined for a cluster storage environment based upon the number of inaccessible nodes satisfying a cluster outage detection metric.
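
Of the three examples, the outage detection rule is the simplest to illustrate. A toy version, assuming the "cluster outage detection metric" is a threshold fraction of unreachable nodes (that rule is an invention for illustration):

```python
def cluster_outage(node_reachable: dict[str, bool],
                   threshold_fraction: float = 0.5) -> bool:
    """Declare a cluster-wide outage when the share of inaccessible
    nodes satisfies the (assumed) detection metric."""
    down = sum(1 for ok in node_reachable.values() if not ok)
    return down / len(node_reachable) > threshold_fraction


nodes = {"n1": True, "n2": False, "n3": False, "n4": False}
print(cluster_outage(nodes))  # True: 3 of 4 nodes are inaccessible
```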

A storage management computing device obtains an information lifecycle management (ILM) policy. A data protection scheme to be applied at the storage node computing device level is determined, and a plurality of storage node computing devices is identified, based on applying the ILM policy to metadata received from one of the storage node computing devices and associated with an object ingested by that device. That storage node computing device is then instructed to generate one or more copies of the object, or fragments of the object, according to the data protection scheme, and to distribute the object copies or object fragments to one or more other storage node computing devices, which store them on one or more disk storage devices.
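
A hedged sketch of applying an ILM policy to ingested-object metadata to choose a protection scheme and destination nodes. The policy fields, the size-based rule, and the node-selection logic are all illustrative assumptions, not the patented method:

```python
from dataclasses import dataclass


@dataclass
class IlmPolicy:
    replicate_if_size_below: int   # small objects: store full copies
    copies: int
    fragments: int                 # large objects: erasure-coded fragments


def plan_protection(policy: IlmPolicy, metadata: dict, nodes: list[str]) -> dict:
    """Apply the policy to an object's metadata and pick target nodes."""
    if metadata["size"] < policy.replicate_if_size_below:
        scheme, count = "replicate", policy.copies
    else:
        scheme, count = "erasure-code", policy.fragments
    # Prefer nodes other than the one that ingested the object.
    targets = [n for n in nodes if n != metadata["ingest_node"]][:count]
    return {"scheme": scheme, "targets": targets}


policy = IlmPolicy(replicate_if_size_below=1 << 20, copies=2, fragments=4)
meta = {"size": 4 << 20, "ingest_node": "sn1"}
print(plan_protection(policy, meta, ["sn1", "sn2", "sn3", "sn4", "sn5"]))
```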

A rate matching technique may be configured to adjust the rate of cleaning of one or more selected segments of a storage array to accommodate a variable rate of incoming workload processed by a storage input/output (I/O) stack executing on one or more nodes of a cluster. An extent store layer of the storage I/O stack may clean a segment in accordance with segment cleaning which, illustratively, may be embodied as a segment cleaning process. The rate matching technique may be implemented as a feedback control mechanism configured to adjust the segment cleaning process based on the incoming workload. Components of the feedback control mechanism may include one or more weight schedulers and various accounting data structures, e.g., counters, configured to track the progress of segment cleaning and free space usage. The counters may also be used to balance the rate of segment cleaning against the rate of the incoming I/O workload; when the incoming I/O rate changes, the rate of segment cleaning may be adjusted accordingly so that the two rates remain substantially balanced.
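
A toy feedback loop in the spirit of this technique: counters track incoming writes and freed space per tick, and the cleaning rate is nudged so the two stay roughly balanced. The gain, units, and update rule are invented for illustration:

```python
class RateMatcher:
    def __init__(self, clean_rate: float = 1.0, gain: float = 0.5):
        self.clean_rate = clean_rate   # segments cleaned per tick
        self.gain = gain
        self.io_counter = 0            # accounting: blocks written this tick
        self.cleaned_counter = 0       # accounting: blocks freed this tick

    def tick(self, incoming_blocks: int, freed_blocks: int) -> float:
        self.io_counter, self.cleaned_counter = incoming_blocks, freed_blocks
        # If writes outpace cleaning, speed cleaning up; otherwise ease off.
        error = incoming_blocks - freed_blocks
        self.clean_rate = max(
            0.0, self.clean_rate + self.gain * error / max(incoming_blocks, 1))
        return self.clean_rate


rm = RateMatcher()
for io, freed in [(100, 60), (100, 90), (40, 90)]:
    print(f"incoming={io} freed={freed} -> clean_rate={rm.tick(io, freed):.2f}")
```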

A low-overhead merge technique enables restart of a merge operation with minimal logging of state information relating to the progress of the merge operation by a volume layer of a storage input/output (I/O) stack executing on one or more nodes of a cluster. The technique enables restart of the merge operation by ensuring that metadata, i.e., metadata pages, generated during the merge operation is not subject to de-duplication, by providing a unique value in each metadata page that distinguishes the page, i.e., renders it distinct, from other metadata pages in an extent store. In addition, the technique ensures that the reference count on each metadata page is a value denoting a lack of de-duplication. To that end, the extent store layer is configured not to increment the reference count for a metadata page if, during the merge operation, the page is identical (and thus would otherwise be subject to de-duplication) to an existing metadata page in the extent store.
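
A minimal sketch of the "make metadata pages unique" idea: a content-addressed store normally de-duplicates identical pages by bumping a reference count, so each merge-generated metadata page embeds a unique value that keeps its content hash distinct. The store layout and nonce placement are assumptions:

```python
import hashlib
import itertools


class ExtentStore:
    def __init__(self):
        self.extents = {}   # content hash -> (data, refcount)

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key in self.extents:
            d, rc = self.extents[key]
            self.extents[key] = (d, rc + 1)   # ordinary data: de-duplicated
        else:
            self.extents[key] = (data, 1)
        return key


_unique = itertools.count()


def put_metadata_page(store: ExtentStore, payload: bytes) -> str:
    # Prepend a unique value so identical payloads never hash-collide,
    # keeping every metadata page's reference count at 1.
    page = next(_unique).to_bytes(8, "big") + payload
    return store.put(page)


store = ExtentStore()
k1 = put_metadata_page(store, b"same merge page")
k2 = put_metadata_page(store, b"same merge page")
assert k1 != k2 and store.extents[k1][1] == 1 == store.extents[k2][1]
```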

NetApp | Date: 2017-07-19

An optimized segment cleaning technique is configured to efficiently clean one or more selected portions or segments of a storage array coupled to one or more nodes of a cluster. A bottom-up approach of the segment cleaning technique is configured to read all blocks of a segment to be cleaned (i.e., an old segment) to locate extents stored on the SSDs of the old segment and examine extent metadata to determine whether the extents are valid and, if so, relocate the valid extents to a segment being written (i.e., a new segment). A top-down approach of the segment cleaning technique obviates reading the blocks of the old segment to locate the extents and, instead, examines the extent metadata to determine the valid extents of the old segment. A hybrid approach may extend the top-down approach to include only the full stripe read operations needed for relocation and reconstruction of blocks, as well as retrieval of valid extents from the stripes, while avoiding any unnecessary read operations of the bottom-up approach.
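
The contrast between the two basic approaches in miniature; the segment, extent, and metadata shapes below are invented for illustration:

```python
def clean_bottom_up(segment_blocks: list[dict], is_valid) -> list[str]:
    # Bottom-up: read every block of the old segment to locate its
    # extents, then keep only those the metadata says are valid.
    return [blk["extent"] for blk in segment_blocks if is_valid(blk["extent"])]


def clean_top_down(extent_metadata: dict, segment_id: int) -> list[str]:
    # Top-down: skip reading the blocks entirely; the extent metadata
    # already identifies the valid extents that need relocation.
    return [eid for eid, meta in extent_metadata.items()
            if meta["segment"] == segment_id and meta["valid"]]


blocks = [{"extent": "e1"}, {"extent": "e2"}, {"extent": "e3"}]
metadata = {"e1": {"segment": 7, "valid": True},
            "e2": {"segment": 7, "valid": False},
            "e3": {"segment": 7, "valid": True}}
print(clean_bottom_up(blocks, lambda e: metadata[e]["valid"]))  # ['e1', 'e3']
print(clean_top_down(metadata, segment_id=7))                   # ['e1', 'e3']
```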

NetApp | Date: 2017-07-05

Data consistency and availability can be provided at the granularity of logical storage objects in storage solutions that use storage virtualization in clustered storage environments. To ensure consistency of data across different storage elements, synchronization is performed across the different storage elements. Changes to data are synchronized across storage elements in different clusters by propagating the changes from a primary logical storage object to a secondary logical storage object. To satisfy the strictest recovery point objectives (RPOs) while maintaining performance, change requests are intercepted prior to being sent to a filesystem that hosts the primary logical storage object and propagated to a different managing storage element associated with the secondary logical storage object.
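
A sketch of intercepting a change request before it reaches the primary's filesystem and propagating it to the secondary's managing storage element. Every interface here is hypothetical; a real implementation would run the two elements on different clusters:

```python
class ManagedStorageElement:
    def __init__(self, name: str):
        self.name = name
        self.log: list[bytes] = []

    def apply(self, change: bytes) -> None:
        self.log.append(change)


def intercept_write(change: bytes, primary: ManagedStorageElement,
                    secondary: ManagedStorageElement) -> None:
    # Propagate to the secondary before committing locally, so the
    # secondary never lags the primary (RPO approaching zero).
    secondary.apply(change)
    primary.apply(change)


p, s = ManagedStorageElement("primary"), ManagedStorageElement("secondary")
intercept_write(b"write A", p, s)
assert p.log == s.log
```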

A storage management computing device, method, and non-transitory computer readable medium persist data on non-volatile memory by maintaining a data storage structure comprising multiple nodes on non-volatile memory in at least one storage server. A determination is made as to whether a received key in an update matches an existing key in one of the multiple nodes of the data storage structure. When the keys match, the update is inserted into a slot in a vector extending from the existing key in that node.
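
A toy version of that keyed-update structure: when an incoming update's key matches an existing key, the update fills the next slot in the vector hanging off that key instead of overwriting it. The in-memory layout is an assumption standing in for the non-volatile structure:

```python
class KeyedUpdateStore:
    def __init__(self):
        self.nodes: dict[str, list[bytes]] = {}   # key -> vector of update slots

    def upsert(self, key: str, update: bytes) -> None:
        if key in self.nodes:
            self.nodes[key].append(update)   # matched key: fill the next slot
        else:
            self.nodes[key] = [update]       # new key: start a fresh vector


store = KeyedUpdateStore()
store.upsert("k1", b"v1")
store.upsert("k1", b"v2")
print(store.nodes["k1"])  # [b'v1', b'v2']
```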

A passive state storage controller monitors a plurality of active state storage controllers to determine when a failure of at least one of the active state storage controllers occurs. Upon determining a failure, the passive state storage controller remaps storage devices from the failed storage controller to itself, and may also remap network interfaces. The passive state storage controller then retrieves the failed storage controller's transaction log from a transaction log database, replays the transactions in the retrieved log, and switches to operating in an active state.
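
A rough sketch of that failover sequence: detect a failed active controller, take over its devices, replay its transaction log, and go active. Every type and field below is a stand-in, not NetApp's interface:

```python
class PassiveController:
    def __init__(self, log_db: dict[str, list[str]]):
        self.log_db = log_db            # transaction log database
        self.devices: list[str] = []
        self.state = "passive"

    def monitor(self, controllers: dict[str, dict]) -> None:
        for name, info in controllers.items():
            if not info["healthy"]:
                self.failover(name, info)
                return

    def failover(self, failed: str, info: dict) -> None:
        self.devices.extend(info["devices"])        # remap storage devices
        for txn in self.log_db.get(failed, []):     # replay pending transactions
            print("replaying:", txn)
        self.state = "active"                       # switch to active state


log_db = {"ctrlA": ["txn1", "txn2"]}
p = PassiveController(log_db)
p.monitor({"ctrlA": {"healthy": False, "devices": ["d1", "d2"]}})
print(p.state, p.devices)  # active ['d1', 'd2']
```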

Techniques for faster reconstruction of segments using a dedicated spare memory unit are described. Zone segments in memory units are associated with a dedicated spare memory unit. In response to a memory unit failure, the zone segments are reconstructed in the dedicated spare memory unit, except for an identified failed zone segment of the failed memory unit, which is already retained in the dedicated spare memory unit. Other embodiments are described and claimed.
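
One illustrative reading of this scheme, under the assumption that the spare pre-stages a copy of one zone segment so that segment can be retained rather than rebuilt; the data model is invented:

```python
def rebuild_into_spare(failed_unit: dict[int, bytes],
                       spare: dict[int, bytes],
                       reconstruct) -> dict[int, bytes]:
    """Rebuild a failed unit's segments into the spare, skipping any
    segment the spare already holds (retained, not reconstructed)."""
    for seg_id in failed_unit:
        if seg_id in spare:
            continue                           # already present: retain as-is
        spare[seg_id] = reconstruct(seg_id)    # rebuild from surviving units
    return spare


spare = {2: b"pre-staged copy of segment 2"}
failed = {1: b"", 2: b"", 3: b""}
result = rebuild_into_spare(failed, spare, lambda s: f"rebuilt {s}".encode())
print(sorted(result))  # [1, 2, 3]
```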
