Ex parte Baliga et al., No. 12/114,641 (P.T.A.B. Nov. 21, 2014)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE

Application No.: 12/114,641          Filing Date: 05/02/2008
First Named Inventor: Priya Baliga   Attorney Docket No.: RSW920080111US1   Confirmation No.: 3504
Correspondent: IBM CORP. (WSM), c/o WINSTEAD P.C., P.O. Box 131851, Dallas, TX 75313
Examiner: TOWFIQ ELAHI               Art Unit: 2625
Mail Date: 11/24/2014                Delivery Mode: PAPER

____________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________

Ex parte PRIYA BALIGA, LYDIA MAI DO, MARY P. KUSKO, and FANG LU
____________

Appeal 2012-005397
Application 12/114,641
Technology Center 2600
____________

Before CARLA M. KRIVAK, CARL W. WHITEHEAD JR., and MICHELLE N. WORMMEESTER, Administrative Patent Judges.

WORMMEESTER, Administrative Patent Judge.

DECISION ON APPEAL

Appellants appeal under 35 U.S.C. § 134(a) from the Examiner’s final rejection of claims 1–4, 6–11, 13–18, and 20, which constitute all the claims pending in this application. Claims 5, 12, and 19 have been cancelled. We have jurisdiction under 35 U.S.C. § 6(b).

We affirm.

STATEMENT OF THE CASE

Introduction

Appellants’ invention relates to computer screen security. (See Specification 1:4–6.) Exemplary independent claim 1 reads as follows:

1. 
A method for enhancing computer screen security, the method comprising:
tracking a location of a gaze of a user on a screen;
distorting locations on said screen other than said location of said gaze of said user;
displaying information in a content area at said location of said gaze of said user;
receiving input from said user to tune said content area to display information; and
reconfiguring said content area to display information in response to input received from said user;
wherein said input is received from said user via one or more of the following methods: audio, touch, key sequences and gestures.

Applied Prior Art

The Examiner relies on the following prior art in rejecting the claims on appeal:

Forman     US 6,603,485 B2        Aug. 5, 2003
Paris      US 2005/0086515 A1     Apr. 21, 2005
Chaiken    US 2006/0123240 A1     June 8, 2006

Rejections

Claims 1–4, 8–11, and 15–18 stand rejected under 35 U.S.C. § 102(b) as being anticipated by Forman. (See Ans. 4–10.)

Claims 6, 13, and 20 stand rejected under 35 U.S.C. § 103(a) as being unpatentable over Forman and Paris. (See Ans. 10–11.)

Claims 7 and 14 stand rejected under 35 U.S.C. § 103(a) as being unpatentable over Forman, Paris, and Chaiken. (See Ans. 11–12.)

ANALYSIS

35 U.S.C. § 102 Rejection of Claims 1–4, 8–11, and 15–18

Appellants argue the Examiner erred in rejecting independent claim 1 because Forman fails to disclose receiving an input from a user to tune a content area for displaying information, where the input is received via audio, touch, key sequences, or gestures. (See App. Br. 4–6; Reply Br. 2–5.) Independent claims 8 and 15 recite similar features. We disagree with Appellants.

Forman discusses a security system, where an entire screen is masked except for a cursor area. (See Forman col. 2, l. 66–col. 3, l. 5; see also Ans. 16.) The cursor area is a display area surrounding the cursor and represents the user’s region of interest. (See Forman col. 3, ll. 2–5, Fig. 
2; see also Ans. 16.) Forman teaches that eye-tracking cursors may be used for cursor control and that eye-tracking cursor implementations are advantageous when moving the cursor area between non-contiguous regions of the screen. (See Forman col. 3, ll. 15–32; see also Ans. 6, 13–14.) The Examiner finds Forman’s cursor area corresponds to Appellants’ claimed content area (see Ans. 16), and Forman’s eye-tracking cursor implementations correspond to the claimed user input via gestures (see Ans. 5–6, 14). We agree with the Examiner’s findings.

Appellants contend the cited portions of Forman focus on cursor control rather than tuning the content area. (See App. Br. 5; Reply Br. 2.) Appellants acknowledge these cited portions discuss moving the cursor area, but Appellants submit that moving is not tuning. (See App. Br. 5; Reply Br. 2.) We find Appellants’ argument unpersuasive. The term “tune” means “adapt; adjust.” THE AMERICAN HERITAGE DICTIONARY 1303 (2d College ed. 1982). This meaning is consistent with examples of tuning in Appellants’ specification. (See Specification 5:9–10 (changing size of content area), 18:7–8 (distorting entire screen), 19:4–6 (blurring area of screen).) Appellants do not identify any language in the specification or in claim 1 that provides further guidance on the meaning of “tuning.” Accordingly, we are unpersuaded by Appellants’ argument that tuning a content area does not encompass moving (or adjusting) Forman’s cursor area (by changing its position). (See also Ans. 5–6, 14, 16.)

Appellants also acknowledge Forman teaches the user, using a mouse, may select the cursor area size from a list of sizes, but contend the input for selecting the cursor area size is not received via audio, touch, key sequences, or gestures. (See App. Br. 5; Reply Br. 2, 4; Forman col. 4, ll. 11–14.) As the Examiner finds, however, the claimed touch feature reads on Forman’s mouse control. (See Ans. 13–14.) 
Accordingly, we find Appellants’ argument unpersuasive.

We nonetheless consider Appellants’ contentions unpersuasive based on Forman’s use of eye-tracking cursors to move the cursor area, as discussed above. (See Forman col. 3, ll. 29–32.) Moving the cursor area constitutes reconfiguring the cursor area by changing or rearranging its position; and, as the Examiner finds, eye-tracking cursor implementations are a form of user input via gestures. (See Ans. 5–6, 14.)

Appellants further contend Forman does not disclose reconfiguring the cursor area in response to the user’s input received via audio, touch, key sequences, or gestures. (See Reply Br. 4.) Appellants raise this argument for the first time in the Reply Brief, and the argument is not in response to a new issue brought up by the Examiner in the Answer. We therefore find this new argument waived and need not consider the argument because it is not timely filed. See Ex parte Borden, 93 USPQ2d 1473, 1474 (BPAI 2010) (informative).

In view of the foregoing, we are unpersuaded of error in the Examiner’s findings as to independent claim 1. Appellants do not argue separately any features recited in dependent claims 2–4, 9–11, and 16–18. Accordingly, we sustain the Examiner’s § 102 rejection of claims 1–4, 8–11, and 15–18.

35 U.S.C. § 103 Rejection

1. Claims 6, 13, and 20

Appellants argue the Examiner erred in rejecting claim 6 because Forman and Paris do not teach the recited steps “detecting a second user gazing on said screen within a proximate range; and enacting a pre-configured action based on location of gaze on said screen of said second user and proximity of said second user to said screen.” (See App. Br. 7–9; Reply Br. 5–8.) Claims 13 and 20 recite similar features. We disagree with Appellants.

As discussed above, the Examiner finds Forman teaches using eye-tracking cursors (detecting a user gazing on a screen) for cursor control when moving the cursor area (enacting a pre-configured action based on the location of the gaze) on the screen. (See Ans. 6, 13–14; Forman col. 3, ll. 15–32.) The Examiner further finds Paris teaches altering a monitor screen (enacting a pre-configured action based on user proximity) upon detecting a second user’s presence in the vicinity of the monitor (detecting a second user within a proximate range). (See Ans. 11, 21; Paris Fig. 1, ¶ 11.) The Examiner concludes the combined teachings of Forman and Paris suggest modifying Forman’s system to protect the screen from a second user’s view by altering the screen upon detecting the second user’s gaze, as taught by Forman, and the second user’s presence (or proximity), as taught by Paris. (See Ans. 11.) We agree with the Examiner’s conclusion.

Appellants contend Paris does not teach detecting a second user’s gaze on the screen or enacting a pre-configured action based on the location of the gaze. (See App. Br. 8–9; Reply Br. 6–8.) As discussed above, however, the Examiner relies on the combined teachings of Forman and Paris in rejecting claim 6. (See Ans. 11.) The Examiner finds Forman teaches detecting a user’s gaze and enacting a pre-configured action based on the gaze, while Paris teaches detecting a second user’s proximity and enacting a pre-configured action based on the proximity. (See Ans. 6, 11, 13–14.) Appellants do not explain why the combination of these teachings in Forman and Paris fails to teach or suggest the argued feature. Even without Paris, it would not be beyond the scope of an ordinarily skilled artisan to track a gaze of a second user as taught by Forman, as the claims do not require that the tracking be simultaneous. 
Accordingly, we find Appellants’ argument unpersuasive and sustain the Examiner’s § 103 rejection of claim 6 and of claims 13 and 20, which recite similar features.

2. Claims 7 and 14

Appellants argue the Examiner erred in rejecting claim 7 because Forman and Chaiken do not teach the recited steps “authenticating said user via one or more biometric technologies; and enabling eye tracking and display functionality if said user is authorized.” (See App. Br. 9–12; Reply Br. 8–11.) Claim 14 recites similar features. We disagree with Appellants.

In rejecting claim 7, the Examiner relies on the combined teachings in Forman and Chaiken. (See Ans. 12, 27–28.) In particular, the Examiner cites Forman for teaching enabling eye tracking and display functionality, and Chaiken for teaching authenticating a user via one or more biometric technologies. (See Ans. 12, 27–28.) Although Appellants acknowledge these separate teachings in Forman and Chaiken (see App. Br. 10–12; Reply Br. 9–10), Appellants contend “[t]here is no language in the cited passages of Forman and Chaiken that teaches enabling eye tracking and display functionality if the user is authorized (where the user is authenticated via one or more biometric technologies).” (See App. Br. 11–12; Reply Br. 10.) As discussed above, however, the Examiner relies on the combined teachings in Forman and Chaiken. According to the Examiner, these combined teachings suggest modifying Forman’s method to include Chaiken’s authentication process in order to provide additional screen security. (See Ans. 12.) We agree with the Examiner’s findings and find Appellants’ argument unpersuasive.

Accordingly, we sustain the Examiner’s § 103 rejection of claims 7 and 14.

DECISION

The Examiner’s decision rejecting claims 1–4, 8–11, and 15–18 under 35 U.S.C. § 102(b) is affirmed.

The Examiner’s decision rejecting claims 6, 7, 13, 14, and 20 under 35 U.S.C. § 103(a) is affirmed.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED

msc