UNITED STATES PATENT AND TRADEMARK OFFICE
_____________

BEFORE THE BOARD OF PATENT APPEALS AND INTERFERENCES
_____________

Ex parte RAGHU KRISHNAMURTHY, GOPAL SHARMA and AMITAVA GUHA
_____________

Appeal 2009-001749[1]
Application 10/955,017
Technology Center 2100
______________

Decided: October 19, 2009
_______________

Before JOHN C. MARTIN, ST. JOHN COURTENAY III, and THU A. DANG, Administrative Patent Judges.

MARTIN, Administrative Patent Judge.

DECISION ON APPEAL

[1] The real party in interest is VERITAS Operating Corporation.

STATEMENT OF THE CASE

This is an appeal under 35 U.S.C. § 134(a) from the Examiner's rejection of claims 1-24, which are all of the pending claims. We have jurisdiction under 35 U.S.C. § 6(b).

We reverse.

A. Appellants' invention

Appellants' invention relates to techniques for resynchronizing mirrored volumes in storage systems. Specification ¶ 0001.

Figure 1 (not reproduced below) is a block diagram illustrating one embodiment of a method of data block resynchronization. Id. at ¶ 0009. System 10 includes a primary site 100 coupled to a secondary site 101 via a data link 130. Id. at ¶ 0016. Primary site 100 and secondary site 101 respectively include storage management devices 110a-b, while storage management devices 110a-b are in turn associated with respective volumes 120a-b. Id.

Figure 3 (not reproduced below) is a flow chart showing one embodiment of Appellants' method of data block resynchronization. Id. at ¶ 0042. Operation begins in block 300, where a data link failure between primary site 100 and secondary site 101 occurs. Id. Subsequent to the data link failure, data blocks that are written to volume 120a at primary site 100 are tracked, for example via a bitmap or a log (block 302). Id. After the failed data link has been restored or an alternate link provisioned, a storage management device associated with volume 120a at primary site 100 conveys an indication of any data blocks that were written to volume 120a subsequent to the data link failure to a storage management device associated with a mirror volume 120b at secondary site 101 (block 304). Id. at ¶ 0043. In response, the storage management device associated with volume 120b creates a rollback snapshot of those data blocks (i.e., of their contents) prior to the start of resynchronization (block 306). Id. at ¶ 0044. Subsequently, resynchronization of the indicated data blocks begins (block 308). Id. at ¶ 0045. If no failure occurs during resynchronization (block 310), resynchronization completes successfully and the consistency of volumes 120a and 120b is restored (block 312). Id. at ¶ 0046. If, on the other hand, a failure is detected during resynchronization (block 310), volume 120b is restored from the rollback snapshot to its state prior to the beginning of resynchronization (block 314). Id.
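To make the sequence of blocks 300-314 easier to follow, the following short sketch restates it in schematic Python. The sketch is purely illustrative: the class, method, and variable names (StorageManagementDevice, on_link_restored, dirty_blocks, and so on) are hypothetical and are not drawn from Appellants' Specification or the record of this appeal.

    # Illustrative sketch only; identifiers are hypothetical and do not appear
    # in Appellants' Specification or the record of this appeal.

    class StorageManagementDevice:
        """Schematic stand-in for a storage management device (cf. devices 110a-b)."""

        def __init__(self, volume):
            self.volume = volume            # block number -> block contents
            self.dirty_blocks = set()       # bitmap/log analogue (block 302)
            self.rollback_snapshot = None   # snapshot taken at the secondary (block 306)

        # Primary-site behavior
        def write_during_link_failure(self, block_no, data):
            """Track data blocks written after the data link fails (block 302)."""
            self.volume[block_no] = data
            self.dirty_blocks.add(block_no)

        def on_link_restored(self, secondary):
            """After restoration of communication is detected, convey an indication
            of the tracked blocks to the secondary (block 304) and resynchronize
            them (block 308), rolling back on failure (blocks 310 and 314)."""
            indication = set(self.dirty_blocks)
            secondary.prepare_rollback_snapshot(indication)       # block 306
            try:
                for block_no in indication:                       # block 308
                    secondary.volume[block_no] = self.volume[block_no]
            except Exception:
                secondary.restore_from_rollback_snapshot()        # block 314
                raise
            self.dirty_blocks.clear()                             # block 312

        # Secondary-site behavior
        def prepare_rollback_snapshot(self, block_numbers):
            """Snapshot the pre-resynchronization contents of the indicated blocks."""
            self.rollback_snapshot = {n: self.volume.get(n) for n in block_numbers}

        def restore_from_rollback_snapshot(self):
            """Return the mirror to its state before resynchronization began."""
            for n, data in self.rollback_snapshot.items():
                if data is None:
                    self.volume.pop(n, None)   # block did not exist before resync
                else:
                    self.volume[n] = data

The feature on which this appeal turns is visible in the sketch: the rollback snapshot at the secondary is created only after restoration of the data link is detected, and before resynchronization of the indicated blocks begins.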
B. The claims

The independent claims before us are claims 1, 9, and 17, of which claim 1 reads as follows:

1. A system, comprising:
a first volume located at a primary site and associated with a first storage management device; and
a second volume located at a secondary site and associated with a second storage management device, wherein said secondary site is coupled to said primary site via a data link, and wherein said second volume is configured as a mirror of said first volume;
wherein in response to detecting a restoration of communication between said primary and said secondary sites following a failure of said data link, said first storage management device is configured to convey to said second storage management device an indication of a data block written to said first volume subsequent to said failure of said data link; and
wherein dependent upon said indication, said second storage management device is configured to create a snapshot of said data block as stored on said second volume prior to resynchronization of said data block.

(Claims App., Br. 25.)

C. The references and rejections[2]

The Examiner relies on the following references:

DeKoning                        US 6,691,245 B1    Feb. 10, 2004
Anderson et al. ("Anderson")    US 7,139,808 B2    Nov. 21, 2006

Claims 1-6, 9-14, and 17-22 stand rejected under 35 U.S.C. § 102(e) for anticipation by DeKoning. Claims 7, 8, 15, 16, 23, and 24 stand rejected under 35 U.S.C. § 103(a) for obviousness over DeKoning in view of Anderson.

[2] A rejection of claims 7, 15, and 23 under 35 U.S.C. § 112, second paragraph, given at page 18 of the Final Action was withdrawn at page 2 of the Answer.

THE ISSUES

Appellants have the burden on appeal to show reversible error by the Examiner in maintaining the rejections. See Gechter v. Davidson, 116 F.3d 1454, 1460 (Fed. Cir. 1997) ("[W]e expect that the Board's anticipation analysis be conducted on a limitation by limitation basis, with specific fact findings for each contested limitation and satisfactory explanations for such findings.") (emphasis added); In re Kahn, 441 F.3d 977, 985-86 (Fed. Cir. 2006) ("On appeal to the Board, an applicant can overcome a rejection [for obviousness] by showing insufficient evidence of prima facie obviousness or by rebutting the prima facie case with evidence of secondary indicia of nonobviousness." (citation omitted)).

The principal issue before us is whether Appellants have shown error in the Examiner's finding that DeKoning discloses detection of the restoration of communication between the local and remote sites following a failover condition.

ANALYSIS

"To anticipate a claim, a prior art reference must disclose every limitation of the claimed invention, either explicitly or inherently." In re Schreiber, 128 F.3d 1473, 1477 (Fed. Cir. 1997). Under the principles of inherency, if the prior art necessarily functions in accordance with, or includes, the claimed limitations, it anticipates. In re King, 801 F.2d 1324, 1326 (Fed. Cir. 1986). "Inherency . . . may not be established by probabilities or possibilities. The mere fact that a certain thing may result from a given set of circumstances is not sufficient." In re Oelrich, 666 F.2d 578, 581 (CCPA 1981).

DeKoning discloses a technique for host-initiated synchronization of data that is stored on both a local storage device and a remote mirroring fail-over storage device. DeKoning, col. 1, ll. 8-11.
Appellants argue that the rejection is improper because DeKoning fails to disclose any of the following: (1) detecting the restoration of communication between sites following a failure of the data link between them; (2) conveying an indication of a data block written to a first volume subsequent to the failure of the data link; or (3) creating a snapshot of the data block as stored prior to resynchronization of the data block, where the snapshot is created after the failed data link has been restored. (Br. 13, 15, 17.)

DeKoning's Figure 1 (not reproduced below) is a block diagram of a computer system with a mirrored storage system incorporating DeKoning's invention. Id. at col. 4, ll. 48-49. To ensure continuity of enterprise operations, the client devices 104 utilize the remote host device 109 and the remote storage device 110 as a fail-over storage system in the event of a failure of the local storage device 108 and/or the local host device 106, such as a failure due to a power failure, a flood, an earthquake, etc. Id. at col. 5, ll. 34-39.

The local host device 106 periodically initiates a "checkpoint," which is a procedure to synchronize data stored throughout the mirrored storage system 102. Id. at col. 5, ll. 58-62. Checkpoint information 116 describing the latest checkpoint state is passed in a message from host device 106 to local storage device 108 and then to remote storage device 110 and is maintained in each device 106, 108, and 110. Id. at col. 6, ll. 3-7. The checkpoint information 116 describes the known coherent state of the data or file system by referencing all prior I/O (Input/Output) operations so that the remote storage device 110 knows exactly which data was coherent at the time of the checkpoint. Id. at col. 6, ll. 12-15. The checkpoint procedure can be periodically initiated by the local host device 106, typically upon each "data cache flush" procedure, in which the data stored in a cache memory 172 in the local host device 106 is sent to the local storage device 108. Id. at col. 8, ll. 48-52.

Subsequent "write" procedures to the local storage device 108 by the local host device 106 lead to synchronization updates of the remote storage device 110, i.e., all new written data is forwarded to the remote storage device 110 for mirrored storage updating. Id. at col. 7, l. 63 to col. 8, l. 1.[3] When the new data replaces data that was present in the remote storage device 110 at the last synchronization, or checkpoint, the preexisting replaced data is transferred to the snapshot repository 146. Id. at col. 8, ll. 3-7. As a result, the preexisting data is maintained and can be restored later if a fail-over condition occurs and the remote host device 109 instructs the remote storage device 110 to roll back to the last checkpoint state. Id. at col. 8, ll. 7-12. When the remote storage device 110 receives checkpoint information 116, it clears or deletes the old data from the snapshot repository related to the affected data volumes and begins a new snapshot for the corresponding data volumes. Id. at col. 8, l. 64 to col. 9, l. 2.

[3] Alternatively, the updates between the local and remote storage devices 108 and 110 can occur at predetermined periodic intervals. DeKoning, col. 8, ll. 1-3.
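For comparison, the sketch below restates DeKoning's checkpoint and snapshot-repository behavior, as characterized in the passages cited above, in the same schematic form. The identifiers (RemoteStorageDevice, apply_write, and so on) are again hypothetical, and the sketch is a simplified reading of the cited description rather than DeKoning's implementation.

    # Simplified, hypothetical restatement of the behavior described at
    # DeKoning col. 5, l. 58 - col. 9, l. 2; identifiers are illustrative only.

    class RemoteStorageDevice:
        """Schematic stand-in for remote storage device 110."""

        def __init__(self):
            self.volumes = {}              # mirrored data: block number -> contents
            self.snapshot_repository = {}  # data replaced since the last checkpoint
            self.checkpoint_info = None    # checkpoint information 116

        def receive_checkpoint(self, checkpoint_info):
            """On new checkpoint information, clear the old snapshot data and begin
            a new snapshot for the affected volumes (col. 8, l. 64 - col. 9, l. 2)."""
            self.checkpoint_info = checkpoint_info
            self.snapshot_repository.clear()

        def apply_write(self, block_no, new_data):
            """Mirror a forwarded write; when it replaces data that was present at
            the last checkpoint, transfer the replaced data to the snapshot
            repository (col. 8, ll. 3-7)."""
            if block_no in self.volumes and block_no not in self.snapshot_repository:
                self.snapshot_repository[block_no] = self.volumes[block_no]
            self.volumes[block_no] = new_data

        def roll_back_to_checkpoint(self):
            """On a fail-over instruction, restore the data that existed at the last
            checkpoint (col. 8, ll. 7-12).  In this simplification, blocks first
            written after the checkpoint are simply left in place."""
            for block_no, old_data in self.snapshot_repository.items():
                self.volumes[block_no] = old_data
            self.snapshot_repository.clear()

As the sketch reflects, the snapshot repository is populated in the ordinary course of mirroring, before any failure occurs; the cited passages do not tie creation of the snapshot to detecting that communication between the sites has been restored, which is the distinction that Appellants press on appeal.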
DeKoning's Figure 5 (not reproduced below) is a diagram of the flow of data within the mirrored storage system shown in Figure 1 during execution of a fail-over procedure. Id. at col. 4, ll. 62-64. When the business continuance client 115 detects a failure condition in the local host and/or storage devices 106 and/or 108, such as a failure by the local storage device 108 to respond to access requests, business continuance client 115 sends a fail-over signal 176 to the remote host device 109, thereby instructing the remote host device 109 to take over servicing data access requests for the affected data volumes 126. Id. at col. 9, ll. 18-25. The remote host device 109 then sends a roll-back signal 177 to the remote storage device 110, thereby instructing the remote storage device 110 to begin to roll back the state of the affected data volumes 126 to the state of the last or selected checkpoint so that the remote storage device 110 can become the primary storage device for the data stored in the mirrored volumes 132 and 134 (Fig. 2). Id. at col. 9, ll. 25-31. Specifically, based on the snapshot data stored in the snapshot repository 146 and the checkpoint information 116, the remote storage device 110 assembles an "image" 178 of the affected data volume(s) 126 (Fig. 2) that is consistent with the state of the stored data as indicated by the last checkpoint information 116 for the affected volumes 126. Id. at col. 9, ll. 32-37. The remote storage device 110 replaces the data in the affected data volumes 126 with the volume image. Id. at col. 9, ll. 39-41.

Regarding the recitation in claim 1 of a "failure of said data link [between the primary and secondary sites]," Appellants do not challenge the Examiner's finding that a natural consequence of a power failure, which is one of the failure conditions detected in DeKoning (col. 5, ll. 34-39), would be the loss of communications between DeKoning's local and remote sites. (Answer 17.)

Regarding the recitation in claim 1 of "detecting a restoration of communication between said primary and said secondary sites following a failure of said data link," the Examiner found that "DeKoning discloses synchronization of remote and local storage devices (108) and (110) upon a failover condition, DeKoning, col. 8, lines 1-12. In order for DeKoning to perform the disclosed synchronization after a failover condition, a communication link must be established between the remote and local storage devices." Advisory Action dated July 20, 2007, at 2 (emphasis added). However, as correctly pointed out by Appellants, the synchronization procedure described in lines 1-12 of column 8 occurs prior to detecting a failover condition. (Br. 14.) Specifically, lines 7-10 of column 8 explain that

    [b]y transferring the preexisting replaced data to the snapshot repository 146, the preexisting data is maintained and can be restored later if a fail-over condition occurs and the remote host device 109 has to instruct the remote storage device 110 to roll back to the last checkpoint state.
In the Answer, the Examiner further found that "[s]urely, DeKoning intended that the local host device that stores data on a local storage device be restored after a failover scenario" (Answer 18), that concluding otherwise "would go against DeKoning's fundamental purpose of 'continu[ing] operations from a stable, coherent state in the event of a failure of the local storage device,' DeKoning, Col. 1, lines 9-17" (id.), and that "[w]ithout a secondary site as a backup, the primary site could suffer a failure and all operations would cease. This is exactly what DeKoning was designed to prevent." (Id.)

We agree with Appellants that these findings, and the similar findings at pages 18-19 of the Answer regarding how DeKoning's system will function in the event a detected failover condition ceases to exist, lack support in DeKoning, which Appellants accurately characterize as "omit[ting] any discussion as to the aftermath of the initial failover" (Reply Br. 4) and as "[n]owhere . . . assert[ing] that the purpose of the disclosure is to recover from every conceivable failure scenario. Rather, DeKoning outlines one particular sequence of events leading up to failover, and then stops" (id. at 3). As a result, Appellants are also correct that "[i]t is entirely conceivable that DeKoning's system can tolerate only one failure and consequent failover, and thereafter would be vulnerable to a second failure." (Id.)

For the foregoing reasons, Appellants have shown that the Examiner erred in finding that DeKoning discloses detecting the restoration of communication between sites following a failure of the data link between them. A fortiori, Appellants have also shown that the Examiner erred in finding that DeKoning additionally discloses: (a) conveying an indication of a data block written to a first volume subsequent to the failure of the data link; and (b) creating a snapshot of the data block as stored prior to resynchronization of the data block, where the snapshot is created after the failed data link has been restored.

The anticipation rejection of claim 1 is therefore reversed, as is the anticipation rejection of independent claims 9 and 17, which recite similar limitations, and the anticipation rejection of dependent claims 2-6, 10-14, and 18-22. The rejection of dependent claims 7, 8, 15, 16, 23, and 24 for obviousness over DeKoning in view of Anderson is reversed because the subject matter relied on in Anderson by the Examiner does not cure the above-identified deficiencies in DeKoning.

DECISION

The rejection of claims 1-6, 9-14, and 17-22 under 35 U.S.C. § 102(e) for anticipation by DeKoning is reversed.

The rejection of claims 7, 8, 15, 16, 23, and 24 under 35 U.S.C. § 103(a) for obviousness over DeKoning in view of Anderson is reversed.

REVERSED

gvw

MEYERTONS, HOOD, KIVLIN, KOWERT, GOETZEL/SYMANTEC
P.O. Box 398
Austin, TX 78767-0398