MICROSOFT TECHNOLOGY LICENSING, LLC (P.T.A.B. Feb. 21, 2020)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 14/327,506
FILING DATE: 07/09/2014
FIRST NAMED INVENTOR: JERED AASHEIM
ATTORNEY DOCKET NO.: 332743-US-CNT
CONFIRMATION NO.: 6411

147148 7590 02/21/2020
Ray Quinney & Nebeker - Microsoft
36 South State Street, Suite 1400
Salt Lake City, UT 84111

EXAMINER: AN, MENG AI T
ART UNIT: 2195
NOTIFICATION DATE: 02/21/2020
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): mspatent@rqn.com, usdocket@microsoft.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
____________
BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________

Ex parte JERED AASHEIM

Appeal 2018-004186
Application 14/327,506
Technology Center 2100
____________

Before MAHSHID D. SAADAT, CARL L. SILVERMAN, and MICHAEL J. ENGLE, Administrative Patent Judges.

SAADAT, Administrative Patent Judge.

DECISION ON APPEAL¹

Pursuant to 35 U.S.C. § 134(a), Appellant² appeals from the Examiner's decision to reject claims 1–20, which are all the claims in this application. We have jurisdiction under 35 U.S.C. § 6(b). We AFFIRM.

¹ An oral hearing for this appeal, which was scheduled on January 9, 2020, was waived.
² We use the word "Appellant" to refer to "applicant" as defined in 37 C.F.R. § 1.42(a). Appellant identifies the real party in interest as Microsoft Technology Licensing, LLC. Appeal Br. 3.

STATEMENT OF THE CASE

Appellant's disclosure is directed to a heterogeneous processing system including "a software hypervisor to autonomously control operating system thread scheduling across big and little cores without the operating system's awareness or involvement to improve energy efficiency or meet other processing goals." Spec. ¶ 6. Claims 1 and 5 are illustrative of the invention and read as follows:

1. A method of scheduling threads for execution on a computer exhibiting a heterogeneous architecture comprising at least two physical processing cores with different processing capabilities, the method comprising:
    exposing respective physical processing cores to an operating system of the computer, wherein the virtual core isolates the operating system from processing capabilities of the physical processing core;
    receiving a power management policy for the computer; and
    providing a hypervisor that schedules thread for execution by the at least two physical processing cores by:
        for the respective threads, choosing a selected physical processing core to execute the thread, wherein the selection of physical processing cores for the respective threads promotes the power management policy for the computer; and
        initiating execution by the selected physical processing cores for the respective threads.

5. The computer-implemented method of claim 1, further comprising:
    receiving a thread scheduling request from the operating system to run a thread on an identified virtual core; and
    selecting at least one physical processing core on which to execute the thread.
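For orientation only, the following is a minimal, hypothetical sketch of the kind of hypervisor-level arrangement the claims recite: the guest operating system names only a virtual core when it asks for a thread to run, and the scheduler picks an idle "big" or "little" physical core in a way that favors a simple power management policy. Every name and data structure here (select_physical_core, POLICY_LOW_POWER, and so on) is invented for illustration; nothing in this sketch is taken from the application, Bernstein, or Ahuja.

```c
/*
 * Illustrative sketch only: a toy model of hypervisor-level thread placement
 * of the general kind recited in claims 1 and 5. All names are hypothetical.
 */
#include <stdio.h>

enum policy { POLICY_LOW_POWER, POLICY_HIGH_PERFORMANCE };

struct physical_core {
    int id;
    int is_big;   /* 1 = high-performance "big" core, 0 = efficient "little" core */
    int busy;     /* 1 if a thread is currently assigned */
};

struct sched_request {  /* what the guest OS submits: "run thread T on virtual core V" */
    int thread_id;
    int virtual_core;   /* the OS only ever names a virtual core */
};

/* The hypervisor, not the OS, picks the physical core that best fits the policy. */
static struct physical_core *select_physical_core(struct physical_core *cores,
                                                  int ncores, enum policy p)
{
    struct physical_core *choice = NULL;
    int prefer_big = (p == POLICY_HIGH_PERFORMANCE);

    for (int i = 0; i < ncores; i++) {
        if (cores[i].busy)
            continue;
        if (choice == NULL || cores[i].is_big == prefer_big)
            choice = &cores[i];
    }
    return choice;   /* NULL means no idle core; a real scheduler would queue */
}

int main(void)
{
    /* Two heterogeneous physical cores; the guest OS sees only virtual cores 0 and 1. */
    struct physical_core cores[] = { { .id = 0, .is_big = 1 }, { .id = 1, .is_big = 0 } };
    enum policy p = POLICY_LOW_POWER;

    struct sched_request reqs[] = { { .thread_id = 10, .virtual_core = 0 },
                                    { .thread_id = 11, .virtual_core = 1 } };

    for (int i = 0; i < 2; i++) {
        struct physical_core *c = select_physical_core(cores, 2, p);
        if (!c)
            break;
        c->busy = 1;   /* stands in for "initiating execution" in this toy model */
        printf("thread %d (virtual core %d) -> physical core %d (%s)\n",
               reqs[i].thread_id, reqs[i].virtual_core, c->id,
               c->is_big ? "big" : "little");
    }
    return 0;
}
```

In this toy model, a low-power policy steers threads to an idle little core first and falls back to a big core only when no little core is free; a real hypervisor would also queue requests, migrate threads, and consult richer policy inputs.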
REFERENCES AND REJECTIONS

The prior art relied upon by the Examiner is:

Name         Reference            Date
Ahuja        US 2009/0037911 A1   Feb. 05, 2009
Bernstein    US 2009/0055826 A1   Feb. 26, 2009
Hum          US 2009/0222654 A1   Sept. 03, 2009

Claims 1–18 stand rejected under 35 U.S.C. § 103(a) as unpatentable over Bernstein and Ahuja. Non-Final Act. 3–9.

Claims 19 and 20 stand rejected under 35 U.S.C. § 103(a) as unpatentable over Bernstein, Ahuja, and Hum. Non-Final Act. 9–11.

ANALYSIS

Claim 1

In rejecting claim 1, the Examiner relies on Bernstein as disclosing the recited method of scheduling threads for execution on a computer including "receiving a power management policy" and "providing a hypervisor that schedules thread for execution" by "choosing a selected physical processing core" and "initiating execution by the selected physical processing cores." Non-Final Act. 3–5 (citing Bernstein ¶¶ 16, 23, 24, 26). The Examiner further relies on Ahuja's disclosure as teaching the step of "exposing respective physical processing cores to an operating system of the computer, wherein the virtual core isolates the operating system from processing capabilities of the physical processing core." Non-Final Act. 5–6 (citing Ahuja ¶¶ 21, 39–41). The Examiner concludes that it would have been obvious to one of ordinary skill in the art to combine the teachings of Ahuja with Bernstein "for assigning tasks to processors in heterogeneous multiprocessors via a hypervisor." Non-Final Act. 6.

First Argument

Appellant contends the proposed combination does not teach or suggest "exposing respective physical processing cores of the at least one processor to an operating system of the computer [as a virtual processing core], wherein the virtual processing core isolates the operating system from processing capabilities of the physical processing core," as recited in independent claims 1, 8, and 15. Appeal Br. 26. In particular, Appellant argues Bernstein's approach to isolating the operating system from processing capabilities of the physical processing core is a rigid one, while Ahuja allows "the operating system and the applications executed therein to take full advantage of the processing capabilities of the processor." See Appeal Br. 27–29 (citing Ahuja ¶¶ 6, 9, 43, 45). According to Appellant, "the 'virtualization' of processors in Ahuja is not provided to isolate the operating system from the processing capabilities of the physical processors," and instead, "the virtualization is merely to aggregate the entire set of processing capabilities that are provided by the collection of physical processors assigned to the partition." Appeal Br. 29.

The Examiner responds by identifying the cited portions of Ahuja as disclosing the disputed claim limitation. Ans. 6.
The Examiner specifically explains:

Ahuja is relied on to teach "exposing respective physical processing cores of the at least one processor to an operating system of the computer as a virtual processing core, wherein the virtual processing core isolates the operating system from processing capabilities of the physical processing core." This is clearly shown in Figure 2 of Ahuja, where the physical processors (PP) are exposed to the operating system (OS) via the hypervisor by means of the virtual processors (VP) [Ahuja ¶0042–45]. Therefore, the OS only see the VP and does not see the processing capabilities of the physical processors (PP).

Id.

We agree with the Examiner's findings that Ahuja's physical processors (PP) connect to the operating systems in a logical partition (LPAR) through a set of virtual processors (VP). See Ahuja ¶¶ 39–40. As such, the virtual core in the virtual processor isolates the operating system in the LPAR from the processing capabilities of the physical core in the physical processor. Contrary to Appellant's argument that Ahuja's virtualization is to "aggregate the entire set of processing capabilities," which Appellant characterizes as "the opposite of isolating the operating system from processing capabilities of the physical processing core," the operating system of LPARs in Ahuja accesses the physical processors through the virtual processors. Therefore, whether the virtual processors are aggregated in a pool (virtual processors (VP) 244, 248, 252, 256, 260, and 264) or not (virtual processors (VP) 236 and 240), the hypervisor manages the access to the physical processor cores, and thus isolates the operating system from the physical processors. See Ahuja ¶¶ 24, 39–40.

Second Argument

Appellant further contends the Examiner erred in combining the references because "Bernstein and Ahuja are technically incompatible, and therefore cannot be combined by a person of ordinary skill in the art." Appeal Br. 31. Appellant argues Bernstein "involves isolating the thread scheduling task from the operating [system]" such that "[t]he cores are not presented to the operating system at all, in physical or virtual form; rather, the entire thread scheduling task is delegated to the hypervisor." Id. According to Appellant, "Ahuja explicitly involves assigning processors to operating systems (within various partitions) for the purpose of enabling the operating systems to take advantage of the features that are supported by the processors." Id.

In response, with respect to the combination of the references, the Examiner explains that Bernstein is concerned with "operating a scheduler coupled to an operating system and to the multi core processor, where the scheduler is operated to be responsive at least in part to information read from the memory to schedule the execution of threads to individual ones of the processor cores." Ans. 6 (citing Bernstein Abstract). The Examiner further points to the references' teachings as follows:

Bernstein continues that a supervisor (i.e. scheduler) may be implemented in a hypervisor, where the hypervisor may be considered as a virtualization layer designed to isolate the OS [Bernstein Fig 1; ¶0023–0024]. Ahuja teaches a method of assigning tasks to processors [Ahuja Abstract]. Ahuja teaches a hypervisor may administer the assignment of physical resources such as memory and processing resources to LPARs.
In addition, the hypervisor may schedule virtual processors on physical processors and may administer the assignment of virtual processors to LPARs [Ahuja Fig 2; ¶0024]. Comparing Fig 1 of Bernstein with Fig 2 of Ahuja show similarity, Bernstein's OS Instructions Compiler to Ahuja's LPAR OS; Bernstein's Supervisor to Ahuja's Hypervisor; and Bernstein's Multicore Processor to Ahuja's physical processors (PP).

Ans. 6–7.

The Examiner also relies on Ahuja as disclosing a "method of the assignment of heterogeneous processor by the hypervisor (i.e. the supervisor) via virtual processors to the guest OS of the LPAR." Ans. 7.

Appellant's argument is unpersuasive because the Examiner relies on the combination of the references as teaching or suggesting the subject matter of the claims. Final Act. 6; Ans. 7. In re Mouttet, 686 F.3d 1322, 1332 (Fed. Cir. 2012) (citing In re Keller, 642 F.2d 413, 425 (CCPA 1981)) ("[T]he test is what the combined teachings of the references would have suggested to those of ordinary skill in the art."). We agree with the Examiner's reasoning that Bernstein discloses assigning heterogeneous processors having different power levels by a supervisor or hypervisor, and that it is reasonable to rely on Ahuja for the particular capabilities set forth in the claims at issue, such as assigning processors by the hypervisor, via virtual processors, to the guest operating systems of the LPAR.

Additionally, we are unpersuaded the Examiner has used improper hindsight or has incorrectly combined the references rather than considering the claimed subject matter as a whole. Appeal Br. 31. Reliance on multiple references in a rejection does not, without more, weigh against the obviousness of the claimed invention. In re Gorman, 933 F.2d 982 (Fed. Cir. 1991). Further, any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. However, so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from Appellant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 1395 (CCPA 1971). Here, the Examiner identifies the relevant portions of each reference including the finding that Bernstein relates to a similar role of a scheduler in connection with the operating system and a multi-core processor, where the scheduler is implemented in a hypervisor as an isolated virtualization layer. Ans. 6 (citing Bernstein Fig. 1, ¶¶ 23–24).

Moreover, Appellant does not point to any evidence of record that the resulting combination would have been "uniquely challenging or difficult for one of ordinary skill in the art" or "represented an unobvious step over the prior art." Leapfrog Enters., Inc. v. Fisher-Price, Inc., 485 F.3d 1157, 1162 (Fed. Cir. 2007) (citing KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 418–19 (2007)). The Examiner's findings are reasonable because the ordinarily skilled artisan would have been "able to fit the teachings of multiple patents together like pieces of a puzzle" because the skilled artisan is "a person of ordinary creativity, not an automaton." KSR, 550 U.S. at 420–21.
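To make the virtual-to-physical mapping discussed above concrete, the following is a minimal, hypothetical sketch of a hypervisor-side table that resolves the virtual processors a guest operating system is given into physical processors, either dedicated or drawn from a shared pool. It is loosely patterned on the arrangement the decision attributes to Ahuja, but every name and detail here (virtual_proc, resolve_physical, the round-robin pool) is invented for illustration and is not taken from Ahuja, Bernstein, or the record.

```c
/*
 * Illustrative sketch only (hypothetical names): a toy virtual-to-physical
 * processor table. The guest OS holds only opaque virtual-processor handles;
 * the mapping to physical processors, dedicated or pooled, lives in the hypervisor.
 */
#include <stdio.h>

struct virtual_proc {
    int handle;        /* all the guest OS ever sees */
    int dedicated;     /* 1 = pinned to one physical processor, 0 = shared pool */
    int physical_id;   /* meaningful only when dedicated */
};

/* Hypervisor-side resolution; the guest OS has no equivalent of this function. */
static int resolve_physical(const struct virtual_proc *vp,
                            const int *pool, int pool_len, int tick)
{
    if (vp->dedicated)
        return vp->physical_id;
    return pool[tick % pool_len];   /* toy round-robin over the shared pool */
}

int main(void)
{
    int shared_pool[] = { 2, 3, 4 };   /* physical processor ids in the pool */
    struct virtual_proc vps[] = { { .handle = 0, .dedicated = 1, .physical_id = 1 },
                                  { .handle = 1, .dedicated = 0 } };

    for (int tick = 0; tick < 3; tick++)
        for (int i = 0; i < 2; i++)
            printf("tick %d: virtual processor %d -> physical processor %d\n",
                   tick, vps[i].handle,
                   resolve_physical(&vps[i], shared_pool, 3, tick));
    return 0;
}
```

The point of the toy table is simply that resolution happens on the hypervisor side; the guest operating system holds only the opaque handles, which is the sense in which it is isolated from the physical processors' capabilities.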
Claim 5

Appellant contends the cited portion of Bernstein in paragraph 26 does not mention the operating system making a thread scheduling request and merely identifies threads or designates them as "primary." Appeal Br. 33. According to Appellant, the focus of the reference is "isolating the thread scheduling task within the virtualization layer of the hypervisor, and retaining a hermetic runtime environment for the operating system." Appeal Br. 34.

We are not persuaded by Appellant's arguments that the Examiner erred. As found by the Examiner, selecting and assigning threads to a specific virtual core and later selecting a physical core to execute that thread is taught in Bernstein's paragraph 26, which describes scheduling threads by the supervisor for execution on processor cores, and in Ahuja's Figure 2, which depicts a hypervisor assigning threads via the virtual processors. Final Act. 7–8; Ans. 7–8. We also find the Examiner's mapping of the claimed "thread scheduling request from the operating system" to Bernstein's "received information identifying the primary threads from the OS," which results in setting the thread to be executed on the processor core, is reasonable. See Ans. 8. In other words, claim 5 only requires a request from the operating system to run a thread, and not identifying the virtual core that is running the requested thread. Therefore, Appellant's argument that Bernstein's "hermetic isolation" of the operating system is the opposite of the claimed feature (Reply Br. 13–14) is unpersuasive of Examiner error because the claim does not preclude receiving the thread scheduling request from the operating system by the hypervisor, which, in turn, identifies the processor core on which to execute the thread.

Remaining Claims

The Examiner has provided a detailed response, supported by sufficient evidence based on the teachings of the cited prior art, to each of the contentions raised by Appellant. We adopt as our own (1) the findings and reasons set forth by the Examiner in the action from which this appeal is taken and (2) the reasons set forth by the Examiner in the Examiner's Answer in response to Appellant's contentions (see Ans. 14–26).

CONCLUSION

As discussed herein, Appellant's arguments have not persuaded us that the Examiner erred in finding that the combination of Bernstein and Ahuja, alone or in further combination with Hum, teaches or suggests the disputed claim limitations. Appellant argues the patentability of the remaining claims based on arguments similar to those presented for claims 1 and 5. See Appeal Br. 5, 14. Therefore, we sustain the 35 U.S.C. § 103(a) rejections of claims 1–20.

DECISION SUMMARY

In summary:

Claims Rejected    35 U.S.C. §    Basis                     Affirmed    Reversed
1–18               103(a)         Bernstein, Ahuja          1–18
19, 20             103(a)         Bernstein, Ahuja, Hum     19, 20
Overall Outcome                                             1–20

AFFIRMED