Ex parte King et al., No. 13/675,844 (P.T.A.B. Nov. 27, 2018)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE

Application No.: 13/675,844
Filing Date: 11/13/2012
First Named Inventor: Brian John King
Attorney Docket No.: 2765US01
Confirmation No.: 9952
Examiner: JEREZ LORA, WILLIAM A.
Art Unit: 2654
Notification Date: 11/29/2018 (electronic delivery to docket@sbmc-law.com)
Correspondent: Wolfe-SBMC, 116 W. Pacific Avenue, Suite 200, Spokane, WA 99201

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication.

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte BRIAN JOHN KING, GAUTHAM J. MYSORE, and PARIS SMARAGDIS

Appeal 2018-000362
Application 13/675,844
Technology Center 2600

Before JOHN A. EVANS, LINZY T. McCARTNEY, and JESSICA C. KAISER, Administrative Patent Judges.

McCARTNEY, Administrative Patent Judge.

DECISION ON APPEAL

Appellants seek review under 35 U.S.C. § 134 of the Examiner's rejections of claims 1-3, 5-19, 21, and 22. We have jurisdiction under 35 U.S.C. § 6(b). We affirm.

BACKGROUND

The present patent application concerns "[t]ime interval sound alignment techniques." Specification ¶ 3, filed November 13, 2012. Claims 1, 10, and 17 are independent. Claim 1 illustrates the claimed invention:

1.
A method implemented by one or more computing devices, the method comprising:

    receiving one or more inputs via interaction with a user interface that indicate that a first time interval in a first representation of sound data generated from a first sound signal corresponds to a second time interval in a second representation of sound data generated from a second sound signal;

    calculating a stretch value based on an amount of time represented in the first and second time intervals, respectively; and

    generating aligned sound data from the sound data for the first and second time intervals based on the calculated stretch value, such that the generating is performed regardless of the alignment of spectral characteristics identified in the sound data of the first and second time intervals, respectively.

Appeal Brief 10, filed December 12, 2016 ("App. Br.").

REJECTIONS

Claims 1-3, 5-7, 9-12, 14-19, and 21 stand rejected under § 103(a) over SONAR.[1]
Claims 8 and 22 stand rejected under § 103(a) over SONAR and Slaney.[2]
Claim 13 stands rejected under § 103(a) over SONAR and Fevrier.[3]

[1] SONAR X1 Reference Guide (copyright 2010).
[2] Slaney (US 5,749,073; May 5, 1998).
[3] Fevrier et al. (US 2005/0198448 A1; Sept. 8, 2005).

DISCUSSION

We have reviewed the Examiner's rejections and Appellants' arguments, and we disagree with Appellants that the Examiner erred. As consistent with the discussion below, we adopt the Examiner's reasoning, findings, and conclusions for the rejections in the Final Office Action mailed July 19, 2016 ("Final Act.") and the Answer mailed April 13, 2017 ("Ans.").

Claim 1 recites "generating aligned sound data from the sound data for the first and second time intervals based on the calculated stretch value, such that the generating is performed regardless of the alignment of spectral characteristics identified in the sound data of the first and second time intervals, respectively." App. Br. 10 (emphasis added).
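As an illustrative aside only (not part of the record or the Board's analysis), the recited steps can be sketched minimally, assuming the stretch value is simply the ratio of the two interval durations and using a naive linear resampling as a stand-in for the generating step; the claim specifies neither, so both are assumptions:

```python
def stretch_value(interval_a, interval_b):
    """Ratio of the durations of two (start, end) time intervals in seconds."""
    dur_a = interval_a[1] - interval_a[0]
    dur_b = interval_b[1] - interval_b[0]
    if dur_b <= 0:
        raise ValueError("second interval must have positive duration")
    return dur_a / dur_b

def time_stretch(samples, factor):
    """Linearly resample `samples` to `factor` times its original length.
    This operates purely on time positions, ignoring spectral
    characteristics of the audio entirely."""
    n_out = max(1, round(len(samples) * factor))
    if len(samples) == 1 or n_out == 1:
        return [samples[0]] * n_out
    out = []
    for i in range(n_out):
        pos = i * (len(samples) - 1) / (n_out - 1)  # map output index to input position
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# A 2-second interval aligned to a 4-second interval: stretch factor 2.0,
# so a 4-sample excerpt is stretched to 8 samples.
factor = stretch_value((0.0, 4.0), (10.0, 12.0))
stretched = time_stretch([0.0, 1.0, 0.0, -1.0], factor)
```

The sketch makes concrete why such generating is "performed regardless of the alignment of spectral characteristics": no frequency-domain analysis is consulted at any point.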
Appellants argue SONAR does not teach or suggest the emphasized element because SONAR "does in fact regard heavily the alignment of the spectral characteristics" when aligning sound data. App. Br. 7. Appellants point out that SONAR states "[w]hen stretching or quantizing multi-track audio, it is critical to maintain the phase relationships of the original recording" and assert SONAR aims "to sync up timing of musical tracks by aligning the transient peaks that are out of alignment." App. Br. 6, 7. Appellants assert that SONAR therefore "cannot replace sound data with overdub audio" as described in the written description. App. Br. 7.

We find Appellants' arguments unpersuasive. Although SONAR discusses maintaining phase relationships and aligning transient peaks when editing sound data, as found by the Examiner, SONAR also suggests aligning sound data without considering the data's spectral characteristics. SONAR discloses a program called AudioSnap and provides an example of using AudioSnap "[t]o adjust the timing of a multi-track [audio] performance while maintaining phase relationships." SONAR 595. According to SONAR, AudioSnap can find transients ("areas where the level increases suddenly") in multi-track audio, place transient markers in the tracks, and use the transient markers to align the tracks. See SONAR 573, 595-99. As noted by Appellants, SONAR discloses "[w]hen stretching or quantizing multi-track audio, it is critical to maintain the phase relationships of the original recording" and describes using transient markers to do so. SONAR 595-99. SONAR explains that although "AudioSnap finds transients automatically ... transient markers don't always appear exactly where you might want them for the kind of editing you want to do" and "sometimes you achieve the best results by editing the markers manually." SONAR 573.
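Purely for illustration (not part of the record), the automatic transient detection SONAR describes, finding "areas where the level increases suddenly," can be sketched with a simple level-difference threshold; the threshold and the level representation are assumptions, not SONAR's actual algorithm:

```python
def find_transients(levels, threshold=0.5):
    """Return indices where the level jumps by more than `threshold`
    relative to the previous sample (a naive stand-in for the transient
    markers SONAR's AudioSnap places automatically)."""
    markers = []
    for i in range(1, len(levels)):
        if levels[i] - levels[i - 1] > threshold:
            markers.append(i)
    return markers

# A sudden jump at index 3 is flagged as a transient marker.
print(find_transients([0.1, 0.1, 0.2, 0.9, 0.8]))  # -> [3]
```

Manual editing of the kind SONAR describes would then amount to moving, adding, or deleting entries in the returned marker list, independent of any spectral analysis.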
SONAR discloses "[y]ou can edit the markers by moving them to new locations, adding markers, filtering out markers, deleting markers, and promoting markers." SONAR 573.

Given these disclosures, the Examiner determined that even though SONAR does not "explicitly state" the disputed element, one of ordinary skill in the art would have found it obvious to manually edit AudioSnap's transient markers and then run AudioSnap for the manually edited markers. Final Act. 4 (citing SONAR 573). The Examiner found that because an AudioSnap user would not have been "limited to using spectral characteristic[s] identified in the sound data" to manually edit the markers, running AudioSnap with the manually edited markers would have generated aligned sound data in the claimed manner. See Final Act. 4-5 (citing SONAR 573, 595); see also Ans. 13.

SONAR provides adequate support for the Examiner's determinations. SONAR explains that although AudioSnap can automatically find transients and edit transient markers, the transient markers "don't always appear exactly where you might want them for the kind of editing you want to do" and "sometimes you achieve the best results by editing the markers manually." SONAR 573 (emphasis added). To allow users to perform "the kind of editing [they] want to do," SONAR teaches that users can edit transient markers by, among other things, "moving them to new locations." SONAR 573. The cited portions of SONAR do not limit how or where a user can move the transient markers. In fact, by stating that users sometimes need to move transient markers "for the kind of editing [they] want to do," SONAR suggests that users can move transient markers wherever their editing needs require.

SONAR's disclosure that "[w]hen stretching or quantizing multi-track audio, it is critical to maintain the phase relationships of the original recording" does not undermine the Examiner's analysis.
Even assuming AudioSnap users generally want to maintain the phase relationships of the original recording when adjusting multi-track audio, SONAR suggests users may want to adjust AudioSnap's transient markers "for the kind of editing [they] want to do" and explains that users can do so by manually moving the markers. SONAR 573. And, as discussed above, SONAR suggests users can move the markers without considering the spectral characteristics of sound data. See SONAR 573. Thus, even if users often want to maintain phase relationships when aligning sound data, SONAR suggests that users can employ AudioSnap to align sound data without maintaining these phase relationships.

Finally, claim 1 does not recite replacing sound data with overdub audio, so we find Appellants' argument on this point unpersuasive. See In re Self, 671 F.2d 1344, 1348 (CCPA 1982) ("Many of appellant's arguments fail from the outset because, as the solicitor has pointed out, they are not based on limitations appearing in the claims."). Moreover, this argument rests on Appellants' assertion that SONAR requires maintaining a phase relationship when adjusting audio, but as explained above, SONAR suggests otherwise.

For the above reasons, we sustain the Examiner's rejection of claim 1. Because Appellants have not presented separate, persuasive arguments for claims 2, 3, 5-19, 21, and 22, we also sustain the Examiner's rejections of these claims.

CONCLUSION

Summary:

Claims 1-3, 5-7, 9-12, 14-19, and 21, rejected under § 103(a) over SONAR: affirmed.
Claims 8 and 22, rejected under § 103(a) over SONAR and Slaney: affirmed.
Claim 13, rejected under § 103(a) over SONAR and Fevrier: affirmed.
Overall outcome: the rejections of claims 1-3, 5-19, 21, and 22 are affirmed.

No period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a). See 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED