UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450
www.uspto.gov

APPLICATION NO.: 15/595,880
FILING DATE: 05/15/2017
FIRST NAMED INVENTOR: Edward P. Harding JR.
ATTORNEY DOCKET NO.: 32902-37018/US
CONFIRMATION NO.: 1967

758 7590 10/06/2020
FENWICK & WEST LLP
SILICON VALLEY CENTER
801 CALIFORNIA STREET
MOUNTAIN VIEW, CA 94041

EXAMINER: GURSKI, AMANDA KAREN
ART UNIT: 3623
NOTIFICATION DATE: 10/06/2020
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated “Notification Date” to the following e-mail address(es): PTOC@Fenwick.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
____________________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________________

Ex parte EDWARD P. HARDING JR., ADAM D. RILEY, CHRISTOPHER H. KINGSLEY, and SCOTT WIESNER
____________________

Appeal 2020-003938
Application 15/595,880
Technology Center 3600
____________________

Before MICHAEL C. ASTORINO, JAMES P. CALVE, and CYNTHIA L. MURPHY, Administrative Patent Judges.

CALVE, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE

Pursuant to 35 U.S.C. § 134(a), Appellant¹ appeals from the decision of the Examiner to reject claims 1–22, which are all the pending claims. Appeal Br. 5. We have jurisdiction under 35 U.S.C. § 6(b).

We AFFIRM.

¹ “Appellant” refers to “applicant” as defined in 37 C.F.R. § 1.42. Appellant identifies Alteryx, Inc. as the real party in interest. Appeal Br. 2.

CLAIMED SUBJECT MATTER

Claims 1, 11, and 16 are independent. Claim 1 is reproduced below.

1. A method performed by a data processing apparatus comprising:
   retrieving a data stream comprising a plurality of data records;
   aggregating the plurality of data records of the data stream to form a plurality of record packets, each of the plurality of record packets having a predetermined size capacity determined based on a memory size of a cache memory associated with the data processing apparatus; and
   transferring respective ones of the plurality of record packets to respective ones of a plurality of threads associated with one or more processing operations of the data processing apparatus.

Appeal Br. 10 (Claims App.).
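For a concrete picture of the three recited steps, the following is a minimal Python sketch of a method of this general shape. It is purely illustrative and is not Appellant's implementation: the names (CACHE_BYTES, aggregate, worker, run) and the byte-length measure of record size are our assumptions.

```python
import queue
import threading

CACHE_BYTES = 64 * 1024 * 1024  # assumed cache size, echoing the Spec's 64MB example

def aggregate(records, capacity=CACHE_BYTES):
    """Aggregate a stream of records into packets no larger than `capacity` bytes."""
    packet, size = [], 0
    for rec in records:
        if size + len(rec) > capacity and packet:
            yield packet                      # packet is at capacity: hand it off
            packet, size = [], 0
        packet.append(rec)
        size += len(rec)
    if packet:
        yield packet                          # flush the final, partial packet

def worker(q):
    """One of a plurality of threads performing a processing operation."""
    while True:
        packet = q.get()
        if packet is None:                    # sentinel: no more packets
            return
        total = sum(len(rec) for rec in packet)  # stand-in processing operation
        print(f"processed {len(packet)} records ({total} bytes)")

def run(records, n_threads=4):
    q = queue.Queue()
    threads = [threading.Thread(target=worker, args=(q,)) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for packet in aggregate(records):         # step 2: aggregation into packets
        q.put(packet)                         # step 3: transfer packets to threads
    for _ in threads:
        q.put(None)
    for t in threads:
        t.join()

run(bytes(1024) for _ in range(100))          # step 1: a stream of 1 KB records
```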
REJECTIONS

Claims 1–3, 5, 7–9, 11–13, and 16–18 are rejected under 35 U.S.C. § 103 as unpatentable over McCaffrey (US 2013/0339473 A1, pub. Dec. 19, 2013) and Greiner (US 2004/0236785 A1, pub. Nov. 25, 2004).

Claims 4, 14, 19, 21, and 22 are rejected under 35 U.S.C. § 103 as unpatentable over McCaffrey, Greiner, and Stevens (US 2014/0297652 A1, pub. Oct. 2, 2014).

Claims 6, 15, and 20 are rejected under 35 U.S.C. § 103 as unpatentable over McCaffrey, Greiner, and Heath (US 6,564,274 B1, iss. May 13, 2003).

Claim 10 is rejected under 35 U.S.C. § 103 as unpatentable over McCaffrey, Greiner, and Stephens (US 2009/0144304 A1, pub. June 4, 2009).

ANALYSIS

Claims 1–3, 5, 7–9, 11–13, and 16–18 Rejected Over McCaffrey and Greiner

Appellant argues claims 1, 11, and 16 as a group. See Appeal Br. 5–8. We select claim 1 as representative. See 37 C.F.R. § 41.37(c)(1)(iv). We summarily sustain the rejection of claims 3, 5, 7–9, 12, 13, 17, and 18, which Appellant does not argue separately. See id.

Regarding claim 1, the Examiner finds that McCaffrey teaches a data processing method that retrieves a data stream of records, aggregates the data records to form record packets (as sets of event data) of a size capacity determined based on a cache memory size of a data processing apparatus, and stores the event data packets in a memory cache cluster. Final Act. 4–5. The Examiner finds that Greiner transmits record packets of a predetermined size capacity over communication threads. Id. The Examiner determines it would have been obvious to modify McCaffrey to provide individual data packets of a predetermined size as taught by Greiner to maximize the packet size for transferring to threads most efficiently. Id. at 6.

Appellant argues that McCaffrey and Greiner do not teach or suggest “a predetermined size capacity determined based on a memory size of a cache memory” as claimed. Appeal Br. 5. Appellant asserts that the Office Action does not explain why McCaffrey teaches an overall amount of data packets having a predetermined size. Id. at 6. Appellant also argues that Greiner teaches to divide a digital image into a plurality of data packets of a predetermined size to transmit over a determined number of communication threads, but the predetermined size capacity is not determined based on a memory size of a cache memory as recited in claim 1. Id.

Resolution of this issue turns on the interpretation of “a predetermined size capacity determined based on a memory size of a cache memory” as recited in claim 1. Appeal Br. 10 (Claims App.). The claim language requires a predetermined size to be “based on” a memory size of a cache memory. The term “based on” does not require a particular relationship between a predetermined size and a memory size of a cache memory. “[A] predetermined size . . . based on a memory size of a cache memory” could be a predetermined size that is less than, equal to, or greater than a memory size of a cache memory. Furthermore, claim 1 does not recite that the record packets are stored in a cache memory.

The description of “a predetermined size capacity” of a packet in the Specification indicates that it can be any size relative to cache memory:

    In an implementation, an optimally-sized capacity for record packets 265 can be predetermined (at startup or compilation time) based on a factorable relationship to the size of the cache memory used in the associated system architecture. In some cases, packets are designed to have a direct relationship (1-to-1 relationship) to cache memory, having a capacity that is a 0th order of magnitude (i.e., 10⁰) to the size of the cache. For example, record packets 265 are configured such that each packet is less than or equal to the size (e.g., storage capacity) of the largest cache on the target CPU. Restated, data records 260 can be aggregated into cache-sized packets. As an example, utilizing a computer system having a 64MB cache to implement the data analytics applications 145 yields record packets 265 having a predetermined size capacity of 64MB. By creating a record packet that is less than or equal to the size of a cache of the data analytics system 140, the record packet can be kept in the cache and accessed faster by tools than if it was stored in random access memory (RAM) or a memory disk. Hence, creating a record packet that is less than or equal to the size of a cache improves data locality.

    In other implementations, the predetermined size capacity for the record packets 265 can be other computational variations of, or derived from a mathematical relationship to, the size of the cache memory, resulting in packets having a maximum size that is smaller, or larger, than that of the cache. For instance, the capacity of a record packet 265 can be 1/10, or a −1 order of magnitude (i.e., 10⁻¹), of the size of the cache memory.

Spec. ¶¶ 34, 35 (emphasis added).
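As a worked example of the quoted passage's arithmetic, using the Specification's own 64MB figure (the variable names are ours, not the Specification's):

```python
cache = 64 * 1024 * 1024       # the Spec's 64MB cache example

one_to_one = cache * 10**0     # 10^0 factor: 64MB packets, the 1-to-1 relationship
one_tenth  = cache // 10       # 10^-1 factor: 6.4MB packets, smaller than the cache
ten_times  = cache * 10**1     # 10^1 factor: 640MB packets, larger than the cache

print(one_to_one, one_tenth, ten_times)  # 67108864 6710886 671088640
```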
The Specification therefore indicates that a predetermined size capacity of record packets can be any size relative to cache memory. It can be the same size as cache memory. It can be smaller than or larger than cache memory. It can be a size factor ranging from 1/10 (10⁻¹) to 10 times (10¹) the size of the cache. Id. ¶ 35. Packet size can be variable and may be optimized based on parameters such as minimum latency and maximum amount of data. Id. ¶ 36. Therefore, we interpret “a predetermined size capacity determined based on a memory size of a cache memory” according to the plain meaning of the claim language interpreted in light of the Specification as a packet size that is less than, equal to, or larger than a cache memory size.

In light of our interpretation, we agree with the Examiner’s finding that McCaffrey aggregates data records into record packets (event data) of a predetermined size capacity determined based on a memory size of a cache memory as claimed. McCaffrey batch processes files by aggregating data records into packets (events) that can be stored in shared memory pool 314, which is a memory cache cluster. McCaffrey ¶¶ 14, 20, 21, 56–60, 63–72, 97, Figs. 3, 5. McCaffrey stores multiple packets in shared memory pool 314, indicating individual packet sizes are less than a cache memory size. See id. Thus, packet size is predetermined “based on” cache memory size.

A skilled artisan would understand from McCaffrey’s teachings that individual records are aggregated into packets of event data that are stored in a common shared memory pool 314 (memory cache). Thus, a group of data packets (events) has a predetermined size capacity based on a memory size of a memory cache cluster such that the predetermined size of the group of packets is less than or equal to the cache memory size. As a consequence, each individual packet of the plurality of packets also has a predetermined size that is less than a cache memory size. Under our interpretation of this limitation, claim 1 encompasses data packet sizes that are a predetermined size that is less than a cache memory size.

McCaffrey explains why this predetermined sizing of event data based on cache memory size is used. McCaffrey performs distributed processing to improve efficiency by aggregating messages (records) in a time window to generate a set of event data (packets) for the time window and store the set of event data in a memory cache cluster. McCaffrey ¶ 14. The system distributes incoming data flows into multiple servers and processes them in a cluster of memory in real time. Id. ¶ 21. The stream processing system thus leverages a shared memory pool (memory cache cluster) to distribute event processing. Id. ¶¶ 57, 58. By aggregating data records into event packets, larger volumes of data can be processed with low latency. Id. ¶¶ 59, 60.
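The time-window aggregation described in the cited paragraphs can be pictured with a short sketch. This is our illustration of that reading, not McCaffrey's code; WINDOW_SECONDS, shared_pool, and stream_writer are hypothetical names:

```python
from collections import defaultdict

WINDOW_SECONDS = 5
shared_pool = defaultdict(list)   # stands in for the shared memory pool / cache cluster

def stream_writer(messages):
    """Aggregate (timestamp, payload) messages into one event packet per time window."""
    for ts, payload in messages:
        window = int(ts // WINDOW_SECONDS)   # which time window the message falls in
        shared_pool[window].append(payload)  # grow that window's event packet

stream_writer([(0.5, "a"), (1.2, "b"), (6.1, "c")])
print(dict(shared_pool))  # {0: ['a', 'b'], 1: ['c']}
```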
Stream writer 306 aggregates individual messages (records) from a time window based on a hierarchy of attributes to generate event packets that are stored in memory cache cluster 314, which is a shared memory pool. Id. ¶ 64. The packets of event data are sized in a predetermined size to fit in memory cache cluster 314. Id. ¶¶ 64, 65, 97. Thus, individual packets are formed in a predetermined size to fit in cache memory. See Final Act. 4–5.

In view of our interpretation of the disputed limitation and McCaffrey’s teachings of this limitation, we sustain the Examiner’s rejection on the basis of McCaffrey’s teachings alone. See In re Bush, 296 F.2d 491, 496 (CCPA 1961) (holding Board’s affirmance of a two-reference rejection based on the teachings of one of the two references without relying on the teachings of the other reference “does not amount to rejection on a new ground”).

Moreover, Greiner teaches that it is known to size individual data packets with a predetermined size capacity to facilitate efficient processing of the packets over communication threads. Final Act. 5; Greiner ¶ 18. This teaching supports McCaffrey’s teaching that the parallel processing of multiple data streams is optimized by forming data streams into packets of a predetermined size relative to a cache memory to facilitate that processing.

Accordingly, we sustain the rejection of claims 1, 11, and 16. We also sustain the rejection of the remaining claims that were not argued separately.

Claims 4, 14, 19, 21, and 22 Rejected Over McCaffrey, Greiner, and Stevens

Appellant does not argue the rejection of claims 4, 14, 19, and 21. Thus, we summarily sustain the rejection of those claims.

Regarding claim 22, the Examiner finds that McCaffrey and Greiner teach determining a size of a record packet based on the predetermined size capacity of the record packet, and that Stevens teaches “determining a size for the record packet based on . . . a threshold latency time associated with processing the record packet” as claimed. Final Act. 12–13. The Examiner finds that Stevens teaches that threshold latency time increases if too much load is on the system and that latency is minimized by caching data. Id.; Ans. 5.

Appellant argues that Stevens teaches a computer system in which a Broadcaster receives snippets and sends them to a node in paragraph 71, but that paragraph 73, which is relied on by the Examiner, teaches that latency of the system increases if the Broadcaster cannot send snippets as fast as it receives them. Appeal Br. 8. Appellant asserts that paragraph 93 of Stevens teaches “caching real-time data packets to minimize retrieval latency.” Id.

Claim 22 depends from claim 1 and recites in pertinent part “for each record packet of a subset of the plurality of record packets: determining a size for the record packet based on the predetermined size capacity of the record packet and a threshold latency time associated with processing the record packet.” Appeal Br. 14 (Claims App.).

As discussed above for the rejection of claim 1, McCaffrey teaches a size of each record packet having a predetermined size capacity. McCaffrey also teaches that this predetermined size capacity resulting from aggregation provides a stream processing system with the capability of “processing large volumes of data types in a low-latency, light-weight fashion” so “[r]eal-time stream processing can help with low latency aggregate.” McCaffrey ¶ 60.

Appellant’s Specification does not describe a “threshold latency time associated with processing the record packet.” It does describe placing data in a cache memory that is readily accessible to computing elements of CPU and RAM during processing to “realize reductions in latency that may be experienced in accessing data.” Spec. ¶ 16. Stevens also teaches to cache data packets to minimize retrieval latency. See Stevens ¶ 93; Appeal Br. 8. Therefore, Stevens sizes data packets to be stored in cache to minimize retrieval latency, just as the Specification describes sizing data packets that are stored in cache memory to reduce latency in accessing the data.

Appellant’s Specification also teaches that data aggregation involves a tradeoff between increased synchronization between threads resulting from using smaller sized packets and increased latency in processing records into packets when using larger sized packets. Spec. ¶ 35. If a first packet size is 2 MB, it optimally can include as many records as possible. Id. ¶ 36. If a second record packet is generated and passed as soon as it is deemed ready, the second record packet size may reach only 1 KB, but the smaller second record packet size can decrease the time latency associated with preparing and packetizing data prior to being processed by the workflow. Id.

Stevens similarly teaches that the Broadcaster 904 must be able to send snippets as fast as it receives them, otherwise the latency of the system increases, as Appellant acknowledges. Stevens ¶ 73; Appeal Br. 8. Stevens thus teaches that snippet size affects latency: as snippet size increases (when Broadcaster 904 cannot send snippets as fast as it receives them), latency increases.
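The two-factor sizing that claim 22 recites, and that the Specification's 2 MB / 1 KB example describes, can be sketched as a packetizer that closes a packet at whichever bound is hit first. The 2 MB capacity echoes the Specification's example; the latency threshold value and all names are our assumptions, not figures from the record:

```python
import time

CAPACITY_BYTES = 2 * 1024 * 1024   # predetermined size capacity (the Spec's 2 MB example)
LATENCY_SECONDS = 0.010            # assumed threshold latency time; not from the record

def packetize(records):
    """Close a packet when it is full OR when the latency threshold expires."""
    packet, size, opened = [], 0, time.monotonic()
    for rec in records:
        packet.append(rec)
        size += len(rec)
        full  = size >= CAPACITY_BYTES                         # capacity bound
        stale = time.monotonic() - opened >= LATENCY_SECONDS   # latency bound
        if full or stale:
            yield packet     # a stale packet may be far smaller than 2 MB (e.g., 1 KB)
            packet, size, opened = [], 0, time.monotonic()
    if packet:
        yield packet
```

Under such a scheme, a small threshold trades packet size for packetization latency, which is the tradeoff the Specification describes.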
The Examiner reasons that managing packet size to reduce or prevent latency, as Stevens teaches, would provide the modified method of McCaffrey with similar improved results of more efficient processing. Ans. 6; Final Act. 13–14. As a general matter, “if a technique has been used to improve one device, and a person of ordinary skill in the art would recognize that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill.” KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 417 (2007). Further, an implicit motivation to combine exists for improvements that make a product faster and more efficient. See DyStar Textilfarben GmbH & Co. Deutschland KG v. C.H. Patrick Co., 464 F.3d 1356, 1368 (Fed. Cir. 2006).

Accordingly, we sustain the rejection of claim 22.

Claims 6, 15, and 20 Rejected Over McCaffrey, Greiner, and Heath

Appellant does not present argument for the rejection of claims 6, 15, and 20. See Appeal Br. 5–9. Thus, we summarily sustain this rejection.

Claim 10 Rejected Over McCaffrey, Greiner, and Stephens

Appellant does not present argument for the rejection of claim 10. See Appeal Br. 5–9. Thus, we summarily sustain this rejection.

CONCLUSION

In summary:

Claims Rejected            | 35 U.S.C. § | Reference(s)/Basis           | Affirmed                  | Reversed
1–3, 5, 7–9, 11–13, 16–18  | 103         | McCaffrey, Greiner           | 1–3, 5, 7–9, 11–13, 16–18 |
4, 14, 19, 21, 22          | 103         | McCaffrey, Greiner, Stevens  | 4, 14, 19, 21, 22         |
6, 15, 20                  | 103         | McCaffrey, Greiner, Heath    | 6, 15, 20                 |
10                         | 103         | McCaffrey, Greiner, Stephens | 10                        |
Overall Outcome            |             |                              | 1–22                      |

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a). See 37 C.F.R. § 1.136(a)(1)(iv).
AFFIRMED