Ex Parte Kim, No. 14/284,005 (P.T.A.B. Jan. 29, 2018)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 14/284,005
FILING DATE: 05/21/2014
FIRST NAMED INVENTOR: Pilwon KIM
ATTORNEY DOCKET NO.: 1398-665 (YPF201307)
CONFIRMATION NO.: 1086

66547 7590 01/31/2018
THE FARRELL LAW FIRM, P.C.
290 Broadhollow Road, Suite 210E
Melville, NY 11747

EXAMINER: WANG, YI SHENG
ART UNIT: 2659
NOTIFICATION DATE: 01/31/2018
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): pto@farrelliplaw.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte PILWON KIM

Appeal 2017-007568¹
Application 14/284,005
Technology Center 2600

Before CARL W. WHITEHEAD JR., ADAM J. PYONIN, and AMBER L. HAGY, Administrative Patent Judges.

PYONIN, Administrative Patent Judge.

DECISION ON APPEAL

This is a decision on appeal under 35 U.S.C. § 134(a) from a final rejection of claims 1–3, 5–13, and 15–20. We have jurisdiction under 35 U.S.C. § 6(b). We AFFIRM.

¹ Samsung Electronics Co., Ltd. is identified as the real party in interest. Appeal Brief 1.

STATEMENT OF THE CASE

Introduction

The Application is directed to a “method and an apparatus for managing audio data in an electronic device, which allow preliminary identification of contents of audio data through a text.” Specification 2:10–11. Claims 1 and 11 are independent. Claim 1 is reproduced below for reference:

1. A method of managing audio data of an electronic device, the method comprising:
converting at least a part of audio data to a text;
storing the converted text as preview data of the audio data;
adding talker information about a recognized talker to the stored preview data; and
displaying the stored preview data in response to a request for a preview of the audio data.

References and Rejections

Claims 1, 2, 6–12, and 16–20 stand rejected under 35 U.S.C. § 103(a) as being unpatentable over Koo (WO 2013/168860 A1; Nov. 14, 2013). Final Action 3.

Claims 3, 5, 13, and 15 stand rejected under 35 U.S.C. § 103(a) as being unpatentable over Koo and Konig (US 7,487,094 B1; Feb. 3, 2009). Final Action 10.

ANALYSIS

We have reviewed the Examiner’s rejections in light of Appellant’s arguments. We have considered in this Decision only those arguments Appellant actually raised in the Briefs. See 37 C.F.R. § 41.37(c)(1)(iv). We are not persuaded the Examiner errs; we adopt the Examiner’s findings and conclusions as our own, to the extent consistent with our analysis below. We add the following primarily for emphasis.

A. Independent Claim 1

Appellant argues the Examiner errs in rejecting independent claim 1, because “Koo at least fails to teach, suggest, or render obvious adding talker information about a recognized talker to the stored preview data and displaying the stored preview data in response to a request for a preview of the audio data, as recited in Claim 1.” Appeal Brief 3–4 (emphasis omitted).
Appellant contends, “[i]nstead, in Koo, as shown and described above, the information is only displayed in a separate pop-up window, and is not shown or described as added to the preview information.” Id. at 5.

We are not persuaded of Examiner error. Claim 1 recites, inter alia, storing converted text as preview data of the audio data, adding talker information about a recognized talker to the stored preview data, and displaying the stored preview data. The claim does not provide additional description of the term “adding,” nor does Appellant identify in the Specification any limiting disclosures—such as a particular memory configuration—for the claimed “adding,” “stored” or “storing.” For example, Appellant’s Figures 3A and 3C show displays of preview data (i.e., text converted from audio) with displays of talker information (e.g., pictures, phone numbers, etc., of the person talking). See Specification 11:2–12:11; see also Specification 12:12–15 (“As described above with reference to FIGs. 3A to 3C, the user of the electronic device 100 according to an embodiment of the present invention can preliminarily identify contents of the audio data through the preview data 21, 22, 23, and 24 or the preview popup window 40 without the reproduction of the audio data.”).

Koo, similar to Appellant’s method, states “terminal 100 may display information associated with the voice memo on the screen through a pop-up window (1110).” Koo ¶ 340; see also Answer 4. Koo teaches adding talker information to the stored converted text (see Koo ¶ 156) and displaying both (see Koo, Fig. 11C, depicting the display of converted text along with a pop up containing the talker’s name and contact information).² See Answer 5 (“In Koo, both of the text displays in the partial screen and in pop-up window are information associated with the voice data and can be considered as preview.”). Thus, Appellant does not persuade us the Examiner errs in finding “Koo renders obvious . . . each of the limitations in Claim[] 1.” Id. We sustain the Examiner’s rejection of independent claim 1. For the same reasons, we sustain the Examiner’s rejections of independent claim 11 and dependent claims 2, 5–10, 12, and 15–20, which are not separately argued. See Appeal Brief 3, 8, 11–12.

² Additionally, we note Koo further discloses displaying both converted text and talker information as part of a file name. See Koo ¶ 159.

B. Dependent Claim 3

Appellant argues the Examiner errs in rejecting dependent claim 3, because “while Konig describes distinguishing between a Speaker A and a Speaker B in audio of a call, . . . Konig does this by recognizing events spoken about by each of Speaker A and Speaker B (i.e., labeling events spoken by the agent (e.g., Speaker A) versus events spoken by the customer (e.g., Speaker B)),” which “is different than determining who the speakers are by analyzing a tone of the audio, as recited in Claim[] 3.” Appeal Brief 11. Particularly, Appellant contends “the tone described in Konig relates to non-linguistic events and thus is not analogous to the method of the present application for analyzing the tone of the audio data to convert at least a part of the audio data into text distinguished by respective talkers.” Id. at 9.

We are not persuaded the Examiner errs.
Claim 3 ultimately depends upon claim 1, and further recites:
wherein converting at least the part of the audio data to the text comprises analyzing a tone of the audio data; and
wherein storing the displayed text as the preview data of the audio data comprises storing the converted text distinguished by respective talkers according to a result of the analysis of the tone of the audio data.

Konig discloses “an acoustical speaker separation is performed to distinguish Speaker A from Speaker B” (Konig 11:16–17) and teaches analyzing the conversation using tones (see, e.g., Konig 3:16–19). See Final Action 10–11. We agree with the Examiner that Konig, thus, teaches or suggests the disputed limitations of claim 3. See Final Act. 10. Appellant’s argument that “the tone described in Konig relates to non-linguistic events” (Appeal Brief 9) is inapposite, as Konig uses the term “non-linguistic events” to refer to qualities of the audio file that are not “words, phrases, and sentences,” that is, audio qualities other than the spoken words themselves (Konig 3:18; see also id. at 4:45–46). We find one of ordinary skill would understand Konig as teaching or suggesting the non-linguistic events (including tones) are used in the converting and storing of the audio recording, within the meaning of the claim. See Final Action 3; see also id. at 11; Konig 3:1–20; 7:64–8:13; 11:9–17.

Separately, we note Koo teaches using tone analysis in converting and storing audio data, as Koo discloses “speaker B converses while calling the other Bob in the state of remembering the voice of the speaker A and the voice of the speaker B (for example, in the state of remembering his or her voice tone, manner of speaking, and the like),” so that “the mobile terminal 100 derives that the name of the speaker A is Bob through the conversation.” Koo ¶ 134. The Examiner finds the “combination of Koo and Konig, teaches, suggests[,] or renders obvious” the limitations of claim 3. Ans. 7. Appellant does not persuade us such finding is erroneous.

Accordingly, we sustain the Examiner’s rejection of dependent claim 3, and dependent claim 13, not separately argued. See Appeal Brief 9.

DECISION

The Examiner’s decision rejecting claims 1–3, 5–13, and 15–20 is affirmed.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED