Ex Parte Denker et al., Patent Trial and Appeal Board, Application 13/631,381 (P.T.A.B. Mar. 29, 2018)

UNITED STATES PATENT AND TRADEMARK OFFICE

APPLICATION NO.: 13/631,381
FILING DATE: 09/28/2012
FIRST NAMED INVENTOR: Grit Denker
ATTORNEY DOCKET NO.: SRI6252-5
CONFIRMATION NO.: 4743
EXAMINER: JANSEN II, MICHAEL J
ART UNIT: 2696
NOTIFICATION DATE: 04/02/2018
DELIVERY MODE: ELECTRONIC

14824 7590 04/02/2018
Moser Taboada / SRI International
1030 Broad Street, Suite 203
Shrewsbury, NJ 07702

UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450
www.uspto.gov

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail addresses: docketing@mtiplaw.com, llinardakis@mtiplaw.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte GRIT DENKER and RUKMAN SENANAYAKE

Appeal 2017-009953
Application 13/631,381
Technology Center 2600

Before CARLA M. KRIVAK, BETH Z. SHAW, and ADAM J. PYONIN, Administrative Patent Judges.

SHAW, Administrative Patent Judge.

DECISION ON APPEAL [1]

Appellants [2] seek our review under 35 U.S.C. § 134(a) of the Examiner's final rejection of claims 1-11, 13-21, and 23-32, which represent all the pending claims. We have jurisdiction under 35 U.S.C. § 6(b). We affirm.

[1] Throughout this Decision we have considered the Appeal Brief filed February 22, 2017 ("App. Br."), the Reply Brief filed July 18, 2017 ("Reply Br."), the Specification filed September 28, 2012 ("Spec."), the Examiner's Answer mailed June 8, 2017 ("Ans."), and the Final Rejection mailed September 22, 2016 ("Final Act.").
[2] Appellants identify SRI INTERNATIONAL as the real party in interest (App. Br. 3).

INVENTION

Appellants' invention is directed to adapting the presentation of user interface elements based on a contextual user model, which includes using passive interaction data, such as gaze-tracking inputs or certain proximity inputs, to determine an aspect of the user's current interaction context (e.g., the user's current focus of attention or current hand position). User interface elements may be changed or relocated based on the user's current interaction context. Spec., Abstract.

Claim 1 is illustrative of the claims at issue and is reproduced below:

1. One or more non-transitory machine-readable media encoded with an adaptive presentation module to adapt the presentation of user interface elements at a computing device, comprising executable instructions configured to:

access a contextual user model, the contextual user model comprising user interaction data stored over time, the user interaction data relating to a plurality of user interactions occurring prior to a current user interaction with the computing device, the user interaction data relating to:

gaze-tracking inputs received at the computing device,
on-screen locations corresponding to each of the gaze-tracking inputs,
content displayed at the on-screen locations, and
software application events related to the content,
proximity inputs received at the computing device, and
hand positions corresponding to each of the proximity inputs;

based on at least some of the user interaction data, determine an area of visual focus of the current user interaction;

determine that an on-screen visual element is located outside the area of current visual focus;

based on at least some of the prior user interaction data in the contextual user model, determining a relevance of the detected on-screen visual element to at least one software application
event in the area of current visual focus; and

based on the determined relevance, change the presentation of the detected on-screen visual element by relocating the on-screen visual element closer to the hand locations based on the proximity inputs and outside the area of current visual focus.

REJECTIONS

The Examiner rejected claims 1-11, 13-21, and 23-32 under 35 U.S.C. § 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. Final Act. 2-5.

The Examiner rejected claims 1-11, 13-21, 23, 24, and 27-32 under pre-AIA 35 U.S.C. § 103(a) as being unpatentable over Horvitz et al. (US 2004/0098462 A1, published May 20, 2004) (hereinafter "Horvitz") and Shimotani et al. (US 2011/0164063 A1, published July 7, 2011) (hereinafter "Shimotani"). Final Act. 6-33.

The Examiner rejected claims 25 and 26 under pre-AIA 35 U.S.C. § 103(a) as being unpatentable over Horvitz, Shimotani, and Bird et al. (US 6,323,884 B1, issued Nov. 27, 2001) (hereinafter "Bird"). Final Act. 34.

ANALYSIS

We conclude the Examiner did not err in finding one skilled in the art would have recognized the combination of references teaches or suggests the disputed limitations in claims 1-11, 13-21, and 23-32 (the pending claims). However, we conclude the Examiner erred in concluding the pending claims fail to comply with the written description requirement.

Section 112 Rejection

Appellants argue the Examiner erred in rejecting claims 1-11, 13-21, and 23-32 under 35 U.S.C. § 112 as failing to comply with the written description requirement.
The Examiner determines the following claim elements lack support: "gaze-tracking inputs received at the computing device, on-screen locations corresponding to each of the gaze-tracking inputs, content displayed at the on-screen locations, software application events related to the content, proximity inputs received at the computing device and hand positions corresponding to each of the proximity inputs[,]" and "based on the determined relevance, change the presentation of the detected on-screen visual element by relocating the on-screen visual element closer to the hand locations based on the proximity inputs and outside the area of current visual focus." Final Act. 3. The Examiner concludes that, while support is found for each claimed aspect independently, "there appears to be no suggestion in the specification that provides any linking language, details suggesting that these aspects are related." Ans. 6.

We agree with Appellants that paragraph 64 of the Specification teaches that an adaptive presentation module may adjust the presentation of an on-screen visual element, such as a notification, based on its perceived relevance to the user's current interaction context and without regard to the gaze-tracking data, and that paragraph 31 teaches that a user's current interaction context can include the position or movement of a user's hands. Reply Br. 4. In addition, paragraph 31 teaches that the system can, through, for example, the input acceleration module or the adaptive presentation module (which includes the teachings of paragraph 64), cause an application to respond differently or more appropriately based on the inputs taught in paragraph 31, which include user interaction inputs such as information, including but not limited to, information related to a user's hands. Id.
We therefore agree with Appellants that paragraphs 31, 50, and 64 of the Specification support the limitations described above of claims 1-11, 13-21, and 23-32. Accordingly, we do not sustain the rejection of claims 1-11, 13-21, and 23-32 under 35 U.S.C. § 112 as failing to comply with the written description requirement.

Section 103 Rejection

Appellants argue the Examiner erred in rejecting claims 1-11, 13-21, 23, 24, and 27-32 under pre-AIA 35 U.S.C. § 103(a) as being unpatentable over Horvitz and Shimotani. In particular, Appellants argue the combination of Horvitz and Shimotani does not teach

based on at least some of the prior user interaction data in the contextual user model, determining a relevance of the detected on-screen visual element to at least one software application event in the area of current visual focus; and based on the determined relevance, change the presentation of the detected on-screen visual element by relocating the on-screen visual element closer to the hand locations based on the proximity inputs and outside the area of current visual focus

as recited in claim 1. App. Br. 14.

The Examiner finds the combination of Horvitz and Shimotani teaches these limitations. Ans. 7-10 (citing Horvitz ¶¶ 58, 69, 95, 100). The Examiner finds that Horvitz explicitly teaches that user activities are monitored. Ans. 8. This can include defining attention models that describe or determine the user's focus of attention or other activity. Id. In particular, paragraph 95 of Horvitz teaches considering "the novelty of the information to the user," and paragraph 100 teaches monitoring "what the user is currently attending to and doing (based on, for example, contextual information, including head pose and/or gaze as tracked by gaze tracking machinery)." Id. at 9. This monitoring is based on user actions, which are related to the current area of focus of a user. See Horvitz Figs. 6-9.
Thus, Horvitz monitors current user interaction data (with software), and Horvitz moves the notification closer to the user's area of focus based upon a determined relevance to some of the user's interaction data (with software). The Examiner therefore concludes, and we agree, that Horvitz teaches the claimed feature "based on at least some of the prior user interaction data in the contextual user model, determining a relevance of the detected on-screen visual element to at least one software application event in the area of current visual focus."

Although Appellants argue in the Reply Brief that Horvitz does not determine a relevance of the detected on-screen visual element to at least one software application event in the area of current visual focus (Reply Br. 8), Appellants provide insufficient evidence to persuade us that the Specification or claims limit "relevance" or "software application event" in a way that, under a broad but reasonable interpretation, is not encompassed by Horvitz's teachings described above. See, e.g., Spec. ¶ 38.

Appellants additionally argue that, because Horvitz fails to teach determining the relevance, Horvitz also fails to teach changing the presentation of the detected on-screen visual element based on the determined relevance. Reply Br. 9. For the same reasons discussed above, we are not persuaded by this argument.

Appellants contend that the combination of Horvitz and Shimotani is based on improper hindsight reasoning. App. Br. 20-22. We are not persuaded by this argument because, as the Examiner explains (Ans. 10-11), any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning, but as long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper.
See In re McLaughlin, 443 F.2d 1392 (CCPA 1971). "The obviousness analysis cannot be confined by a formalistic conception of the words teaching, suggestion, and motivation, or by overemphasis on the importance of published articles and the explicit content of issued patents." KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 419 (2007). "The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results." Id. at 416.

The combination of Horvitz and Shimotani teaches the claimed invention, as described in the Final Action and Answer, which we agree with and adopt as our own. The Examiner provides a sufficient reason for combining (i.e., use of a known technique: moving screen elements) as well as a reason why one would seek to combine the known technique in the manner claimed: improving the operability and user friendliness of the device by adjusting a screen element. Final Act. 10; Ans. 11. Appellants have not proffered sufficient evidence or argument to persuade us of error in the Examiner's proposed motivation. Moreover, we are unpersuaded an ordinarily skilled artisan would not have been motivated to apply Shimotani's proximity detection to another system, or that such a modification would have been uniquely challenging or beyond the skill of an ordinarily skilled artisan.

As to Appellants' other arguments regarding combinability, we are unpersuaded because mere lawyer's arguments and conclusory statements that are unsupported by factual evidence are entitled to little probative value. In re Geisler, 116 F.3d 1465, 1470 (Fed. Cir. 1997) ("An assertion of what seems to follow from common experience is just attorney argument and not the kind of factual evidence that is required to rebut a prima facie case of obviousness."); see also In re De Blauwe, 736 F.2d 699, 705 (Fed. Cir. 1984).
In the absence of sufficient evidence or a line of technical reasoning to the contrary, the Examiner's findings are reasonable, and we find no reversible error. Thus, we are not persuaded of error in the Examiner's rejection of claim 1 under 35 U.S.C. § 103(a). Accordingly, we sustain the rejection of claim 1. Because Appellants have not presented separate patentability arguments, or have reiterated substantially the same arguments as those previously discussed for patentability of claim 1 above, claims 2, 4-11, 13-21, and 23-32 fall therewith. See 37 C.F.R. § 41.37(c)(1)(iv).

Dependent claim 3

We are not persuaded by Appellants' argument that Horvitz fails to teach "determin[ing] a duration of user attention to an on-screen location corresponding to a gaze-tracking input and change the presentation of the current user interface element based on the duration of user attention," as recited in dependent claim 3. App. Br. 23-24; Reply Br. 12-15. Horvitz teaches determining a duration of user attention because Horvitz tracks and determines a user's gaze, which is used to determine if a user is ignoring a "herald" or notification. Horvitz ¶¶ 56, 59, 63; Ans. 12. Thus, we are not persuaded of error in the Examiner's rejection of dependent claim 3 under 35 U.S.C. § 103(a). Accordingly, we sustain the rejection of dependent claim 3.

CONCLUSION

The decision of the Examiner rejecting claims 1-11, 13-21, and 23-32 under 35 U.S.C. § 112(a) is reversed.

The decision of the Examiner rejecting claims 1-11, 13-21, and 23-32 under 35 U.S.C. § 103(a) is affirmed.

DECISION

The decision of the Examiner rejecting claims 1-11, 13-21, and 23-32 is affirmed.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED