Ex parte Mathan, Appeal 2011-007274, Application 11/463,949 (P.T.A.B. Mar. 21, 2013)

UNITED STATES PATENT AND TRADEMARK OFFICE

BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte SANTOSH MATHAN

Appeal 2011-007274
Application 11/463,949
Technology Center 3700

Before MELANIE L. McCOLLUM, JEFFREY N. FREDMAN, and STEPHEN WALSH, Administrative Patent Judges.

WALSH, Administrative Patent Judge.

DECISION ON APPEAL

This is an appeal under 35 U.S.C. § 134(a) from the rejection of claims directed to an analysis system, a computer program product, and a method of performing human factor analysis. The Patent Examiner rejected the claims for obviousness. We have jurisdiction under 35 U.S.C. § 6(b). We affirm.

STATEMENT OF THE CASE

The Specification states: "as the history of technology driven work transformation demonstrates, new technologies can often impact human performance negatively. In order to exploit the benefits of new tools, it is vital that potential usability problems be identified and remedied well ahead of deployment." (Spec. 1, [0002].) The Specification further states: "Research indicates that empirical observation of users performing tasks is one of the most effective ways to identify potential Human Systems Integration (HSI) problems. Human Systems Integration (HSI) refers to the integration of human considerations into the design and support of new technology." (Id. at [0003].)

Claims 1-8, 10, 12-17, and 20 are on appeal. Claim 1 illustrates the subject matter on appeal and reads as follows:

1.
An analysis system, comprising:

    at least one behavioral sensor configured to obtain continuous user interaction time series data documenting an interaction between a user and a user testing system;

    at least one state sensor configured to obtain user impact data generated during the interaction; and

    at least one processing unit configured to receive the continuous user interaction time series data from the at least one behavioral sensor, receive the user impact data from the at least one state sensor, and mark the continuous user interaction time series data by the user impact data,

    wherein the at least one processing unit uses a plurality of linear and nonlinear pattern recognition techniques as classifiers to classify segments of the user interaction data, wherein further each classifier of the plurality votes on the classification of the segments and a majority vote determines the classification for each segment of the user interaction data.

The Examiner rejected claims 1-8, 10, 12-17, and 20 under 35 U.S.C. § 103(a) as unpatentable over Abbott1 and Tan.2 Appellant focuses argument on claim 1 and does not argue the claims separately. We select claim 1 as representative; claims 2-8, 10, 12-17, and 20 will stand or fall with claim 1. 37 C.F.R. § 41.37(c)(1)(vii).

OBVIOUSNESS

The Issue

The Examiner's position is that Abbott described a system that monitors, records, and analyzes a plurality of behavioral, environmental, and physiological inputs of a user. (Ans. 3-4.) The Examiner found that Abbott aggregated low-level sensor signals into higher level variables to classify the condition of the user, using IF-THEN statements rather than linear and non-linear pattern recognition techniques to classify the data, and that Abbott was also silent about further classification based on a majority vote. (Id. at 4.)
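As background for the dispute that follows, the voting arrangement recited in claim 1, in which a plurality of linear and nonlinear pattern recognition techniques each classify a segment of data and a majority vote determines the segment's label, can be sketched in a few lines of code. The sketch below is illustrative only and is not part of the record; the classifier functions, thresholds, and labels are hypothetical stand-ins, not Appellant's disclosed classifiers.

```python
from collections import Counter

# Hypothetical stand-ins for the claimed "plurality of linear and
# nonlinear pattern recognition techniques" used as classifiers.

def linear_classifier(segment):
    # Linear technique: weighted sum of the samples against a threshold.
    score = sum(0.5 * x for x in segment)
    return "high_workload" if score > len(segment) else "low_workload"

def variance_classifier(segment):
    # Nonlinear technique: variance of the samples against a threshold.
    mean = sum(segment) / len(segment)
    var = sum((x - mean) ** 2 for x in segment) / len(segment)
    return "high_workload" if var > 1.0 else "low_workload"

def range_classifier(segment):
    # Second nonlinear technique: peak-to-peak range against a threshold.
    return "high_workload" if max(segment) - min(segment) > 3.0 else "low_workload"

def classify_segment(segment, classifiers):
    # Each classifier of the plurality "votes" on the segment's label;
    # the majority vote determines the classification for the segment.
    votes = Counter(c(segment) for c in classifiers)
    label, _ = votes.most_common(1)[0]
    return label

classifiers = [linear_classifier, variance_classifier, range_classifier]
segments = [[0.1, 0.2, 0.1, 0.3], [2.0, 6.0, 1.0, 5.5]]
labels = [classify_segment(s, classifiers) for s in segments]
print(labels)
```

The classifiers here are deliberately trivial; the claim element at issue is the ensemble structure (several distinct techniques, one vote each, majority rules), not any particular recognition technique.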
The Examiner found, however, that Tan disclosed classifying user interaction data into different brain states, and using linear and non-linear data analysis pattern recognition. (Id.) The Examiner concluded:

    it would have been obvious to one of ordinary skill in the art at the time of the invention to substitute the use of IF-THEN statements to classify user condition as disclosed in Abbott with the pattern recognition techniques disclosed in Tan, because Tan teaches the use of such techniques as a sufficient method of determining cognitive workload that improves on standard techniques of classification by taking into consideration the effects of "cognitive artifacts" to more accurately classify interaction data (Tan paragraphs [0033] and [0058]).

(Id. at 4-5.)

1 Kenneth H. Abbott III et al., US 2003/0154476 A1, published Aug. 14, 2003.
2 Desney S. Tan et al., US 2007/0185697 A1, filed Feb. 7, 2006, published Aug. 9, 2007.

Appellant contends the "Examiner erred in abridging certain claim elements as non-structural" (App. Br. 9-11), and that "Tan fails to describe the subject matter ascribed to Tan by the Examiner" (id. at 12-13). More specifically, Appellant contends that:

a. Tan "fails to describe at least one processing unit uses a plurality of linear and non-linear pattern recognition techniques as classifiers to classify segments of the user interaction data" (id. at 12-13);

b. the combination of Abbott and Tan "fails to describe 'a behavioral sensor configured to obtain continuous user interaction time series data documenting an interaction between a user and a user testing system'" (id. at 13-15); and

c. the combination of Abbott and Tan "fails to describe 'at least one processing unit configured to . . . mark the continuous user interaction time series data by the user impact data'" (id. at 15-16).

Findings of Fact

1. State sensors can be cognitive and/or physical state sensors. (Spec. 5, [0023].)

2.
Examples of state sensors include an electroencephalograph (EEG) sensor, an electrocardiogram (ECG) sensor, an eye motion detector, a head tracker, a near infrared imaging sensor, and a dead reckoning module that tracks a user's physical position using a combination of a gyroscope and global positioning system (GPS) sensors. (Id. at 6, [0024].)

3. The data collected by state sensors is referred to as "user impact data." (Id.)

4. Behavioral sensors collect data regarding a user's interaction with a test system, such as audio recordings, video screen captures, and point of view video streams, to provide an analyst with information about the test system state and the underlying task content. (Id. at 5, [0023].)

5. The data collected by behavioral sensors 102 is referred to as "user interaction data." (Id. at 6, [0023].) User interaction data refers to data regarding overt physical actions of a user, such as keystrokes, mouse clicks, verbal utterances, etc. (Id.)

6. Behavioral sensors and state sensors communicate with a processing unit. (Id. at 6, [0026].)

7. The processing unit "classifies the user impact data and indexes the user interaction data using the classified user impact data. . . . Exemplary signal processing components and machine learning techniques used to index the user interaction data using the user impact data include, but are not limited to, statistical analysis, thresholds, and classifiers." (Id. at 8, [0029].)

8. The processing unit may use classifiers to classify the user impact data as representative of high or low demands on attention levels, working memory, scanning effort, and communication load. (Id. at 8, [0030].)

9. Classifiers used in the processing unit are feature selection algorithms used to identify specific features of the user impact data. (Id. at 8, [0031].)

10.
Appellant points to paragraphs [0029]-[0032] of the Specification, which disclose that exemplary signal processing components and machine learning techniques are used to index (i.e., mark) the user interaction data using the user impact data. Non-limiting examples include statistical analysis, thresholds, and classifiers of the user impact data. (Reply Br. 4.)

11. Appellant states that "FIG. 6 expressly illustrates an example of marking exemplary user interaction time series data." (Id.)

12. Appellant's Figure 6 is reproduced here: "Figure 6 is an exemplary chart depicting differences in scanning pattern behavior for two levels of workload." (Spec. 3, [0015].) "Figure 6 provides an example of differences in scanning pattern behavior in two levels of workload detected by a head tracker such as head tracker 309. As can be seen in Fig. 6, high workload periods are characterized by a lower proportion of detected visual scanning than low workload periods." (Spec. 13, [0046].)

13. Abbott described a computer-based system for storing information about a user's state, acquired via sensors:

    For example, a variety of sensors can provide information about the current physiological state of the user, geographical and spatial information (e.g., location and altitude), and current user activities. Some devices, such as a microphone, can provide multiple types of information. For example, if a microphone is available, the microphone can provide sensed information related to the user (e.g., detecting that the user is talking, snoring, or typing) when not actively being used for user input. Other user-worn body sensors can provide a variety of types of information, including that from thermometers, sphygmo[mano]meters, heart rate sensors, shiver response sensors, skin galvanometry sensors, eyelid blink sensors, pupil dilation detection sensors, EEG and EKG sensors, sensors to detect brow furrowing, blood sugar monitors, etc.
    In addition, sensors elsewhere in the near environment can provide information about the user, such as motion detector sensors (e.g., whether the user is present and is moving), badge readers, video cameras (including low light, infra-red, and x-ray), remote microphones, etc. These sensors can be both passive (i.e., detecting information generated external to the sensor, such as a heart beat) or active (i.e., generating a signal to obtain information, such as sonar or x-rays).

(Abbott 10, [0075].)

14. Abbott disclosed:

    when information about a current event of interest is to be stored, an embodiment of the Computer-Augmented Memory (CAM) system acquires a variety of current state information of different types (e.g., video, audio, and textual information) about the environment, about a user of the CAM system, and about the CAM system itself via internal and external sensors and other input devices. The CAM system then associates the variety of state information together as a group, and stores the group of state information for later retrieval. In addition to the current state information, the CAM system can also associate other information with the group, such as one or more recall tags that facilitate later retrieval of the group, or one or more annotations to provide contextual information when the other state information is later retrieved and presented to the user.

(Id. at 2, [0026].)

15. Abbott disclosed:

    A model of the current conditions can include a variety of condition variables that represent information about the user and the user's environment at varying levels of abstraction.
    For example, information about the user at a low level of abstraction can include raw physiological data (e.g., heart rate and EKG) and geographic information (e.g., location and speed), while higher levels of abstraction may attempt to characterize or predict the user's physical activity (e.g., jogging or talking on a phone), emotional state (e.g., angry or puzzled), desired output behavior for different types of information (e.g., to present private family information so that it is perceivable only to myself and my family members), and cognitive load (i.e., the amount of attention required for the user's current activities).

(Id. at 4, [0041].)

16. Abbott classified its data using IF-THEN rules, but stated that processing could be done in a variety of ways. (Id. at 16, [0116].)

17. Tan described a method entitled "Using Electroencephalograph Signals For Task Classification And Activity Recognition." Exemplary uses of the method include, but are not limited to: comparing the cognitive workload levels or workload types of a plurality of user interfaces; evaluating cognitive utility of user interfaces in real time to enable dynamically adapting user interfaces to users' states; and presenting an enhanced, detailed awareness of the cognitive workload of groups of users. (Tan 1, [0008].)

18. Tan disclosed: "A user's performance can be evaluated by comparing how hard a user works on one task compared to another task. A user interface's performance can be evaluated by comparing how hard users work on one user interface versus another user interface." (Id. at 3, [0031].)

19. Tan disclosed: "Pattern recognition is used to discover the pattern in the artifacts in the EEG signal that indicate the brain state the user is in while performing the task." (Id. at 3, [0032].)

20.
Tan disclosed:

    Exemplary brain states are brain states that may exist during tasks such as high and low workload, math calculations, 3D image rotation, playing a computer game, etc. For example, three brain states are determined. Each of the three brain states is associated with three mental tasks: rest, math calculation, and image rotation. A user is asked to rest, i.e., sit still without thinking about anything in particular. The user is then asked [to] calculate the product of a pair of numbers, such as 34x257. The user is then asked to imagine some object, such as a peacock, rotated in 3D space. At block 155, a labeled set of the EEG data is collected. As the user performs the three tasks, the signals from a sensor array 120 are recorded by an EEG 110 and sent to a computing device 100. The unprocessed, i.e., raw, EEG data is "labeled" as having been collected during one of the three tasks and hence, may represent a brain state associated with a task.

(Id. at 4, [0037].)

Principles of Law

"[O]bviousness requires a suggestion of all limitations in a claim." CFMT, Inc. v. Yieldup Intern. Corp., 349 F.3d 1333, 1342 (Fed. Cir. 2003) (citing In re Royka, 490 F.2d 981, 985 (CCPA 1974)).

"Absent claim language carrying a narrow meaning, the PTO should only limit the claim based on the specification or prosecution history when those sources expressly disclaim the broader definition." In re Bigio, 381 F.3d 1320, 1325 (Fed. Cir. 2004).

Analysis

Upon consideration of the evidence on this record, and each of Appellant's contentions, we find that the preponderance of evidence on this record supports the Examiner's conclusion that the subject matter of Appellant's claims is unpatentable. Accordingly, we sustain the Examiner's rejections for the reasons set forth in the Answer, which we incorporate herein by reference, including the Examiner's responses to Appellant's arguments. We add the following for emphasis.

a.
Appellant argues that Tan "fails to describe at least one processing unit uses a plurality of linear and non-linear pattern recognition techniques as classifiers to classify segments of the user interaction data." (App. Br. 12-13.) We agree. Tan analyzed EEG signals, and it is undisputed that EEG signals are user impact data in Appellant's terminology because they are data collected by a state sensor. However, the rejection is based on the obviousness of using Tan's pattern recognition techniques to analyze Abbott's user condition data. As user condition data, Abbott collected user interaction data from behavioral sensors and user impact data from state sensors. (FF 13, FF 15.) Because the rejection concluded that it would have been obvious to apply Tan's pattern recognition to analyze all of Abbott's user condition data, "[each reference] must be read, not in isolation, but for what it fairly teaches in combination with the prior art as a whole." In re Merck & Co., Inc., 800 F.2d 1091, 1097 (Fed. Cir. 1986) (citation omitted). The Examiner found, and Appellant does not dispute, that Tan's method "improves on standard techniques." (Ans. 5.) Appellant's claim is open to using pattern recognition to analyze both kinds of data, and we conclude the rejection set out a prima facie case of obviousness for doing so.

b. Appellant argues that the combination of Abbott and Tan "fails to describe 'a behavioral sensor configured to obtain continuous user interaction time series data documenting an interaction between a user and a user testing system.'" (App. Br. 13-15.) The Examiner found this argument unpersuasive because Abbott uses the same equipment Appellant uses to collect continuous user interaction time series data documenting interaction between a user and a testing system, e.g., "video camera, still camera, audio recorder, and a keystroke logger." (Ans. 6-8.) The evidence supports the Examiner's finding. (See FF 13.)
We find this argument unpersuasive for the same reason the Examiner found it unpersuasive.

c. Appellant argues that the combination of Abbott and Tan "fails to describe 'at least one processing unit configured to . . . mark the continuous user interaction time series data by the user impact data.'" (App. Br. 15-16.) As the Examiner noted, the marking feature is not expressly described in the Specification. Although Appellant contends that Figure 6 "expressly illustrates an example of marking" (Reply Br. 4), we cannot find evidence the illustration is "express." (See FF 12.) Figure 6 is only a visual depiction of data association. Tan taught its data is "labeled" to associate the EEG data with the user's work state. (FF 20.) Appellant has not persuaded us that there is a material difference between Abbott's aggregating or associating the data for later recall (FF 14), Tan's labeling (FF 20), and Appellant's marking or indexing. (See Ans. 8-9.)

SUMMARY

We affirm the rejection of claims 1-8, 10, 12-17, and 20 under 35 U.S.C. § 103(a) as unpatentable over Abbott and Tan.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a).

AFFIRMED

lp