UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 15/273,591
FILING DATE: 09/22/2016
FIRST NAMED INVENTOR: Peter Meier
ATTORNEY DOCKET NO.: P28504USC1 (119-1119USC1)
CONFIRMATION NO.: 2478

Apple - Blank Rome
c/o Blank Rome LLP
717 Texas Avenue, Suite 1400
HOUSTON, TX 77002

EXAMINER: GILES, NICHOLAS G
ART UNIT: 2699
NOTIFICATION DATE: 01/03/2020
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): houstonpatents@blankrome.com, mbrininger@blankrome.com.

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte PETER MEIER and THOMAS SEVERIN

Appeal 2018-007426
Application 15/273,591
Technology Center 2600

Before ELENI MANTIS MERCADER, DENISE M. POTHIER, and STEVEN M. AMUNDSON, Administrative Patent Judges.

POTHIER, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE

Pursuant to 35 U.S.C. § 134(a), Appellant[1][2] appeals from the Examiner's decision to reject claims 22–41. Appeal Br. 7. Claims 1–21 have been canceled. Id. at 14 (Claims App.). We have jurisdiction under 35 U.S.C. § 6(b).

We AFFIRM.

[1] We use the word Appellant to refer to "applicant" as defined in 37 C.F.R. § 1.42(a) (2016). Appellant identifies the real party in interest as Apple, Inc. Appeal Br. 3.

[2] Throughout this opinion, we refer to the Final Action ("Final Act.") mailed August 14, 2017, the Appeal Brief ("Appeal Br.") filed April 16, 2018, the Examiner's Answer ("Ans.") mailed May 10, 2018, and the Reply Brief ("Reply Br.") filed July 10, 2018.

CLAIMED SUBJECT MATTER

The claims are directed to "a method and system capable of providing multimedia information to a user at reduced battery consumption." Spec. ¶ 1. Claim 22 is reproduced below:

  22. An information system, comprising:
    a camera;
    a processor operatively coupled to the camera;
    a device operatively coupled to the processor; and
    a memory device operatively coupled to the camera, the processor and the device, the memory device comprising instructions executable by the processor to:
      obtain, in a low-power mode of the information system, an image captured by the camera;
      extract, in the low-power mode, a first feature of an object in the image;
      generate, in the low-power mode, a higher level descriptor of the first feature;
      determine, in the low-power mode, that the higher level descriptor matches a reference object feature descriptor; and
      activate, in response to the determination, a high-power mode of the information system.

Appeal Br. 14 (Claims App.).

We have reviewed the Examiner's rejections in light of Appellant's arguments presented in this appeal. Arguments which Appellant could have made, but did not make, in the briefs are deemed to be waived. See 37 C.F.R. § 41.37(c)(1)(iv). On the record before us, we are unpersuaded the Examiner has erred.
Except as noted, we adopt as our own the findings and reasons set forth in the rejections from which the appeal is taken and in the Examiner's Answer.

THE DOUBLE-PATENTING REJECTIONS

Claims 22, 24–31, 33–38, 40, and 41 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 9–12 of U.S. Patent No. 9,560,273 and Holz. Final Act. 6–8. Claims 23, 32, and 39 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 9 of U.S. Patent No. 9,560,273, Holz, and Lord. Id. at 8–10.

Appellant does not challenge the above rejections. See generally Appeal Br.[3] Accordingly, we summarily sustain these rejections. See Hyatt v. Dudas, 551 F.3d 1307, 1314 (Fed. Cir. 2008) (explaining that when appellant fails to contest a ground of rejection, the Board may affirm the rejection without considering its substantive merits); see also 37 C.F.R. § 41.37(c)(1)(iv); MANUAL OF PATENT EXAMINING PROCEDURE (MPEP) § 1205.02 (9th ed. rev. 08.2017 Jan. 2018) ("If a ground of rejection stated by the examiner is not addressed in the appellant's brief, appellant has waived any challenge to that ground of rejection and the Board may summarily sustain it.").

[3] Appellant requested that these rejections be held in abeyance on page 6 of the Reply, submitted November 6, 2017, to the Final Office Action mailed August 14, 2017.

REFERENCES

The prior art relied upon by the Examiner is:

Name | Reference          | Date
Lord | US 2011/0034176 A1 | Feb. 10, 2011
Holz | US 2014/0192206 A1 | July 10, 2014

THE OBVIOUSNESS REJECTION

Claims 22–41 are rejected under 35 U.S.C. § 103 as being unpatentable over Holz and Lord. Final Act. 10–14.

The Examiner determined Holz teaches many of the features in claim 22, including a memory device comprising instructions executable on a processor to obtain an image captured by a camera in a low-power mode and to activate a high-power mode of the camera in response to detecting an object. Final Act. 10–11 (citing Holz ¶¶ 3–4, 18, 21, 25–29); Ans. 3–4, 7–8. The Examiner stated Holz is silent regarding the "extract," "generate," and "determine" elements in claim 22 but turns to Lord in combination with Holz to teach these limitations (Final Act. 12), namely "to show that the [captured] object of interest can be detected" as claimed. Ans. 4.

Appellant argues claims 22, 24–31, 33–38, 40, and 41 as a group. Appeal Br. 7–11. We select claim 22 as representative. See 37 C.F.R. § 41.37(c)(1)(iv).

Claims 22, 24–31, 33–38, 40, and 41

When addressing the recited "extract, in the low-power mode, a first feature of an object in the image," "generate, in the low-power mode, a higher level descriptor of the first feature," and "determine, in the low-power mode, that the higher level descriptor matches a reference object feature descriptor" elements in claim 22, the Examiner discussed Lord and proposed combining Lord with Holz. See Final Act. 12–13; see Ans. 3–5, 8. The Examiner stated Lord teaches identifying local features in an unknown image, generating keypoint data from the local features, comparing data with a reference database, and determining the closest matching image in the database using a Euclidean distance-like measurement to determine the detected object. Final Act. 12 (citing Lord ¶¶ 253, 739, 900–910); Ans. 4–5, 8.
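For orientation only, here is a minimal sketch of the pipeline claim 22 recites: watch in a low-power mode, describe what the camera sees, and wake to a high-power mode on a descriptor match. Every function, value, and threshold below is a made-up stand-in; none of it comes from Holz, Lord, or the application on appeal.

```python
import numpy as np

rng = np.random.default_rng(0)

def capture_frame():
    """Stand-in for a low-frame-rate, low-resolution camera read."""
    return rng.random((32, 32))

def extract_first_feature(image):
    """Stand-in extractor: a small patch around the brightest pixel."""
    r, c = np.unravel_index(int(image.argmax()), image.shape)
    return image[max(r - 2, 0):r + 3, max(c - 2, 0):c + 3]

def describe(feature):
    """Stand-in 'higher level descriptor': a normalized intensity histogram."""
    hist, _ = np.histogram(feature, bins=8, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def low_power_watch(reference_descriptors, max_frames=1000, threshold=0.35):
    """Loop in low-power mode; report a wake-up once a descriptor matches."""
    for _ in range(max_frames):
        descriptor = describe(extract_first_feature(capture_frame()))
        distances = np.linalg.norm(reference_descriptors - descriptor, axis=1)
        if distances.min() < threshold:
            return "high-power mode activated"
    return "still in low-power mode"

# Build a toy reference database and run the watcher.
references = np.stack(
    [describe(extract_first_feature(capture_frame())) for _ in range(5)]
)
print(low_power_watch(references))
```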
Appellant argues Holz and Lord collectively do not teach extracting a feature of an object in the image, generating a higher level descriptor of the feature, and determining the higher level descriptor matches a reference object feature descriptor in a low-power mode. Appeal Br. 7–11; Reply Br. 2–4. More specifically, Appellant argues Lord identifies keypoints, which are local image features, and screened keypoints, which are a subset of the keypoints, but that the screened keypoints are not a descriptor of an image's feature as claimed and as mapped by the Examiner. Appeal Br. 8. Appellant contends the screening generates a subset of keypoints, but none of these keypoints is a descriptor of a particular keypoint and thus none is "a higher level descriptor of the first feature" as claimed. Id.; Reply Br. 3–4. Appellant further asserts the Examiner is mapping the recited "first feature" and the "higher level descriptor" to the same components in Lord. Appeal Br. 8–9; Reply Br. 4.

Appellant also argues Holz detects brightness or reflectivity changes in the environment rather than detecting a particular object or its features and extracting an object's features. Appeal Br. 9–10; Reply Br. 2–3. Appellant further asserts Lord teaches away from performing a SIFT (scale-invariant feature transform) in a low-power mode. Appeal Br. 10 (citing Lord ¶ 915); Reply Br. 3 (citing Lord ¶ 918).

ISSUES

Under § 103, has the Examiner erred by determining that Holz and Lord collectively would have taught or suggested a memory device comprising instructions executable by the processor in low-power mode to: (1) "extract . . . a first feature of an object in the image"; (2) "generate . . . a higher level descriptor of the first feature"; and (3) "determine . . . that the higher level descriptor matches a reference object feature descriptor"?

ANALYSIS

Based on the record before us, we sustain the Examiner's rejection.

We begin by construing the key disputed limitation of claim 22, "a higher level descriptor of the first feature [of an object in the captured image]." Appeal Br. 14 (Claims App.). Turning to the Specification, as did the Examiner (Ans. 4), the disclosure describes "at least one of the current feature descriptor or the reference feature descriptor is a higher level description of an object, making it invariant to scale and/or rotation and/or light" (Spec. ¶ 49) and provides an example of "local feature descriptors as SIFT" (id. ¶ 11). Based on the Specification, some SIFTs, which are local or current feature descriptors (see id. ¶ 11), can be "a higher level descriptor of the first feature" of an object in the image as claim 22 recites. See id. ¶ 49. The Specification also describes "a feature descriptor [that] is determined in order to enable the comparison and matching of features . . . might be an n-dimensional binary vector" (id. ¶ 12) and "a descriptor can advantageously be a vector, which is derived from a 2D image or a part of a 2D image or 3D data, which is created by not just transforming pixels into a different color space or normalizing their values" (id. ¶ 89).
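To make the Specification's vector language concrete, here is a tiny illustration of a descriptor as an n-dimensional binary vector compared by Hamming distance. The 8-bit size, the values, and the Hamming comparison are illustrative assumptions; only the "n-dimensional binary vector" idea comes from Spec. ¶ 12.

```python
import numpy as np

# Hypothetical 8-dimensional binary feature descriptors (made-up values),
# in the spirit of the Specification's "n-dimensional binary vector".
current = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
reference = np.array([1, 0, 1, 0, 0, 0, 1, 0], dtype=np.uint8)

# Binary descriptors are commonly compared by Hamming distance,
# the count of differing bits; a small count indicates a close match.
hamming = int(np.count_nonzero(current != reference))
print(hamming)  # 1 of 8 bits differ -> a close match
```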
Given the broadest reasonable construction of "a higher level descriptor of the first feature" in light of the Specification as it would be interpreted by one of ordinary skill in the art, this "descriptor" is data or a data structure, including but not limited to a SIFT or vector, that describes at least one feature of an object captured in an image.

Turning now to the prior art, Holz's summary discloses known cameras, operating in a low-power mode, capturing images and switching to high-power mode "[o]nce an object of interest has been detected in the field of view of the cameras" (e.g., a person enters the field of view). Holz ¶ 3. Holz further describes "motion detection that triggers a 'wake-up' of the motion-capture system can be accomplished in several ways." Id. ¶ 4. Although Holz describes using brightness to indicate an object's presence (id. ¶¶ 4, 18, 26), Holz discloses other existing detection techniques, including "[i]n some implementations, images captured by the camera(s) at a very low frame rate are analyzed for the presence or movement of objects of interest," while in a low-power mode. Id. ¶ 4, quoted in Ans. 4; see Final Act. 13 (stating "Holz was shown to, while in a lower power mode, detect the moving object by image analysis of captured images."). Thus, contrary to Appellant's assertion (Appeal Br. 9–10), Holz is not completely silent regarding detecting a particular object and at least suggests some known implementations where features of an object can be extracted to detect the object's presence (e.g., analyzing the captured image as Holz describes) in a low-power mode.

This suggestion in Holz further leads an ordinarily skilled artisan to look to Lord for known teachings related to how to analyze for the presence of an object within an image and thus detect an object of interest in an image. Ans. 4 (stating "that Lord was used to show that the object of interest can be detected" as claimed). For example, Lord teaches an image analysis/object recognition technique (Lord ¶ 897; see Ans. 9) that searches an image, extracts features from the image, and recognizes an object within an image, including using a SIFT image recognition technique. See Lord, Title "Image Search, Feature Extraction, Pattern Matching, Etc." above ¶ 879; id. ¶ 896. Lord discloses the SIFT strategy "works by identification and description—and subsequent detection—of local image features." Id. ¶ 900. Lord specifically discloses SIFT involves extracting local image features in a reference image (e.g., keypoints) (id. ¶¶ 900–901) and computing a keypoint descriptor (e.g., a histogram) (id. ¶ 909), which also leads to describing objects by a set of SIFT features (e.g., a SIFT feature vector) (id. ¶¶ 900, 909). See Final Act. 12 (citing Lord ¶¶ 900–910).

Lord thus describes not only extracting object features (e.g., keypoints) of a reference image but also creating a reference object feature descriptor (e.g., keypoint descriptors or a histogram), which describes reference object features (e.g., a SIFT feature vector). As such, Lord teaches creating "a reference object feature descriptor" (e.g., SIFT features for a reference image) as recited in claim 22. Lord further states the image analysis/object recognition technique (Lord ¶ 897) involves processing an unknown image in a similar manner, including extracting local image features in the unknown image (e.g., generating keypoint data) (see id. ¶¶ 900–901, 910).
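For readers unfamiliar with the technique, a short sketch of this extract-then-describe flow using OpenCV's SIFT implementation; this is a generic illustration of SIFT as Lord describes it, not Lord's own code, and the filename is a placeholder.

```python
import cv2  # opencv-python >= 4.4, where SIFT ships in the main module

# Load an "unknown" (query) image in grayscale; the path is a placeholder.
image = cv2.imread("unknown.png", cv2.IMREAD_GRAYSCALE)

# Identify local image features (keypoints) and compute a 128-dimensional
# descriptor for each one in a single pass.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(image, None)

print(len(keypoints))     # number of local features extracted
print(descriptors.shape)  # (number_of_keypoints, 128)
```

Each row of `descriptors` is the gradient-histogram vector computed around one keypoint, which is the sense in which a set of SIFT features can describe an object.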
As such, Lord teaches "extract[ing] a first feature of an object in the image" (e.g., extracting local image features in the unknown image) as recited in claim 22. We therefore disagree with Appellant that Lord's extracting features of the unknown image, mapped to the "first feature" in claim 22 (see Final Act. 12), fails to teach the recited "instructions executable by the processor to: . . . extract . . . a first feature of an object in the image" as recited. Appeal Br. 7. Moreover, when combined with Holz, the combination teaches to perform this "extract" function "in low-power mode" as claim 22 also recites.

Lord further describes the above process compiles a reference database, suggesting that not only is keypoint data stored in the database but also reference keypoint descriptors and SIFT feature vectors for subsequent detection. Lord ¶¶ 900, 910. For example, Lord also discusses identifying a closest-matching image in the database by using "a Euclidean distance calculation." Id. ¶ 910. Although a Euclidean distance calculation may involve a one-dimension calculation,[4] existing sources describe this distance calculation as involving two or three dimensions.[5] Moreover, Lord states as few as three SIFT features from an object are enough to describe an object (Lord ¶ 900), suggesting the disclosed closest-matching-image technique involving a Euclidean distance calculation requires at least three SIFT features (e.g., a vector) from the unknown image to detect a match. As such, Lord suggests to one skilled in the art "generat[ing] . . . a higher level descriptor of the first feature" of the captured image and "determin[ing] . . . that the higher level descriptor matches a reference object feature descriptor" (e.g., matching the SIFT feature vector of the reference image to the SIFT feature vector of the unknown image) as claim 22 recites.

[4] Euclidean Distance in 'n'-Dimensional Space, available at https://hlab.stanford.edu/brian/euclidean_distance_in.html (last visited December 11, 2019).

[5] Glossary - Euclidean distance, available at http://rosalind.info/glossary/euclidean-distance/ (last visited December 11, 2019).

We recognize that the Examiner discusses and maps keypoint data to "a higher level descriptor" and "a reference object feature descriptor" in claim 22 at one point in the rejection. Final Act. 12; Ans. 5. But the rejection further states the recited "generate . . . a higher level descriptor of the first feature" and "determine . . . that the higher level descriptor matches a reference object feature descriptor" are taught by paragraphs 900 through 910 in Lord, which address the above-discussed "closest matching image technique" in Lord. Additionally, the Examiner specifically discusses Lord's teaching related to the "closest-matching image" (Final Act. 12), "where the closest-matching image in the database is identified by a Euclidian distance-like measure" (Lord ¶ 910), when addressing the "generate" and "determine" elements in claim 22. Final Act. 12. Moreover, the Examiner relies on claim 22's discussion for rejecting claim 23, which further limits the "determine" function to include "determin[ing] a distance measure between the higher level descriptor and each of the plurality of reference object feature descriptors." See id. at 13.
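To illustrate the "determine" step the Examiner maps to this closest-matching technique, a minimal numpy sketch comparing one descriptor against every descriptor in a reference database by pure Euclidean distance; the database size, the 128-dimension width (mirroring SIFT), and the random data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical reference database: 1000 stored 128-dimensional descriptors.
reference_db = rng.random((1000, 128)).astype(np.float32)
# Hypothetical descriptor generated from the captured image.
query = rng.random(128).astype(np.float32)

# Euclidean distance from the query to EACH reference descriptor:
# the square root of the sum, over all 128 dimensions, of squared differences.
distances = np.linalg.norm(reference_db - query, axis=1)

best = int(distances.argmin())
print(best, float(distances[best]))  # index and distance of the closest match
```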
Given the foregoing explanation, we are not persuaded by Appellant's arguments related to Lord's keypoint data or that Lord's disclosed keypoint data are not "a higher level descriptor of the first feature" or "a reference object feature descriptor" in claim 22. Appeal Br. 7–9; Reply Br. 3–4. That is, these arguments do not address the Examiner's additional findings related to SIFTs and the Euclidean distance-like measure. That Appellant chooses to focus on paragraphs 900 through 903 of Lord in the briefing (see Appeal Br. 7–10) without addressing cited paragraphs 909 and 910 (see generally Appeal Br.) is telling in this regard.

Regarding the "low-power mode" limitation, Appellant contends Lord teaches away from performing a SIFT in a low-power mode because Lord "indicates that a customized FPGA chip, presumably running at full power, can barely perform SIFT at a full frame rate in the first place, much less on a mobile phone while in a lower-power mode." Appeal Br. 10 (citing Lord ¶ 915); Reply Br. 3 (citing Lord ¶¶ 918–919).

We are not persuaded. First, the combination proposes to use Lord's technique with Holz's device (e.g., computer system 300), not specifically a cell phone device. See Final Act. 11–12. Second, Lord teaches at least one implementation (e.g., Bonato's) operating at a rate faster than the cell phone implementation. Lord ¶ 915; see id. ¶ 912. Third, Lord teaches the implementations "can be adapted for use on a cell phone platform." Id. ¶ 918 (emphasis added). Thus, Lord does not limit its technique to cell phones. Nor is claim 22 limited to a cell phone. Appeal Br. 14 (Claims App.). Fourth, there is insufficient persuasive evidence on this record to substantiate Appellant's contention that Lord is operating at "full power" or "can barely perform SIFT at a full frame rate" as argued. Appeal Br. 10.

For the foregoing reasons, Appellant has not persuaded us of error in the rejection of independent claim 22 and claims 24–31, 33–38, 40, and 41, which are not argued separately.

Claims 23, 32, and 39

Appellant separately argues claims 23, 32, and 39 as a group. Appeal Br. 7, 11–12. We select claim 23 as representative. See 37 C.F.R. § 41.37(c)(1)(iv).

Appellant asserts Holz and Lord fail to teach the "determine a distance measure between the higher level descriptor and each of the plurality of reference object feature descriptors" limitation in claim 23 because Lord discusses using a "'best-bin-first' [BBF] algorithm." Appeal Br. 11 (quoting Lord ¶ 910). According to Appellant, this algorithm is an approximate algorithm that cuts off its approach "after a specific number of nearest bins have been explored." Id. at 12 (quoting Lowe, David G., Distinctive Image Features from Scale-Invariant Keypoints 20–21[6]). Appellant thus asserts the BBF algorithm does not perform the distance calculation "for each neighbor" or "each of the plurality of reference object feature descriptors" as claim 23 recites. Id.

[6] Available at https://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf.

This argument is unavailing. Lord states "[a] 'best-bin-first' algorithm is typically used instead of a pure Euclidean distance calculation . . . ." Lord ¶ 910 (emphasis added). Thus, although Lord teaches a typical approach, Lord does not rule out using the other described "pure Euclidean distance calculation." Id. (emphasis added).
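To make the distinction between BBF and a pure Euclidean calculation concrete, the sketch below contrasts an exhaustive search, which computes a distance to each reference descriptor as claim 23 recites, with a crude cutoff-limited approximation in the spirit of BBF. Real best-bin-first uses a k-d tree and a priority queue (per Lowe), so the simple cutoff loop here is only a simplified stand-in for that behavior.

```python
import numpy as np

rng = np.random.default_rng(7)
refs = rng.random((1000, 16)).astype(np.float32)  # made-up reference descriptors
query = rng.random(16).astype(np.float32)

# Exhaustive ("pure Euclidean") search: a distance to EVERY reference.
exact_best = int(np.linalg.norm(refs - query, axis=1).argmin())

def approx_nearest(refs, query, max_checks=200):
    """Cutoff-limited search: stop after max_checks candidates, so some
    references are never compared (the approximation BBF-style methods make)."""
    best_i, best_d = -1, float("inf")
    for i in range(min(max_checks, len(refs))):
        d = float(np.linalg.norm(refs[i] - query))
        if d < best_d:
            best_i, best_d = i, d
    return best_i

print(exact_best, approx_nearest(refs, query))  # may disagree: the cutoff risk
```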
Lord thus teaches and suggests algorithms other than BBF, including an approach where the "distance measure between the higher level descriptor and each of the plurality of reference object feature descriptors," as recited in claim 23, is performed.

For the foregoing reasons, Appellant has not persuaded us of error in the rejection of claim 23 and claims 32 and 39, which are not argued separately.

CONCLUSION

In summary:

Claims Rejected | 35 U.S.C. § | Reference(s)/Basis            | Affirmed | Reversed
22–41           |             | Nonstatutory double patenting | 22–41    |
22–41           | 103         | Holz, Lord                    | 22–41    |
Overall Outcome |             |                               | 22–41    |

TIME PERIOD FOR RESPONSE

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a). See 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED

ATTACHMENT TO FOOTNOTE 4

Euclidean Distance In 'n'-Dimensional Space
(https://hlab.stanford.edu/brian/euclidean_distance_in.html, printed 12/19/2019)

Euclidean distance is a measure of the true straight-line distance between two points in Euclidean space.

One Dimension

In an example where there is only 1 variable describing each cell (or case) there is only 1-dimensional space. The Euclidean distance between 2 cells would be the simple arithmetic difference: $x_{cell1} - x_{cell2}$ (e.g., $APHW_{cell1}$ = 1.11603 ms and $APHW_{cell2}$ = 0.97034 ms; they are (1.11603 - 0.97034) = 0.14569 ms apart).

Two Dimensions

This has already been described here.

Three Dimensions

With 3 variables the distance can be visualized in 3D space such as that seen below.

[Figure: 3D scatter plot with axes APHW (ms), mTau, and eEPSC amplitude, showing the distance 'e' between cell 1 and cell 2.]

Distance 'e' would be the distance between cell 1 & cell 2. We could determine it using Pythagoras' theorem as seen previously, but we first need to find the value of 'd' using values 'a' and 'b'.

1. $a = APHW_{cell1} - APHW_{cell2}$
2. $b = mTau_{cell1} - mTau_{cell2}$
3. $d^2 = a^2 + b^2$
4. $c = eEPSC_{cell1} - eEPSC_{cell2}$
5. $e^2 = d^2 + c^2$

Substituting $a^2 + b^2$ for $d^2$ from line 3 you get:

6. $e^2 = a^2 + b^2 + c^2$

Substituting the values for 'a', 'b' & 'c' from 1, 2 & 4:

7. $e^2 = (APHW_{cell1} - APHW_{cell2})^2 + (mTau_{cell1} - mTau_{cell2})^2 + (eEPSC_{cell1} - eEPSC_{cell2})^2$

This way each of the dimensions (APHW, mTau & eEPSC amplitude) are added together to get the Euclidean distance in one equation.

More Than 3 Dimensions ('n'-dimensions)

While pretty much impossible to visualize, Pythagoras' principle can be applied to more than 3 dimensions. To generalize this we could say that 'e', seen above, is the distance 'D' between any 2 cells (cell i and cell j): $D_{ij}$. The number of dimensions being worked in depends on the number of variables each cell (case) is described by.
If each cell is described by 3 variables then it is 3D space; if there are 20 variables then it is 20-dimensional space. Therefore 'n' variables is represented in 'n'-dimensional space. Each cell (case) will have a value 'x' for each variable (the variables will be represented from the first 'v' = 1 to the last 'v' = 'n', so that 'v' = 1 to 'n'). So distance in each dimension (like in 1, 2 & 4 above) could be represented by:

$x_{vi} - x_{vj}$

which in English would be: the value of the vth variable for cell i minus the value of the vth variable for cell j. Which, remembering Pythagoras' Theorem in English (the square of the hypotenuse is equal to the sum of the squares of the other two sides), itself in English would be: the square of the distance between 2 cells (i & j; $D_{ij}$) is equal to the sum ($\sum$), from the first variable (v = 1) to the last variable (n), of the squares of the distances in each dimension (which is found by finding the difference between each cell's value for each variable, cell i's value for the vth variable being $x_{vi}$ and cell j's value being $x_{vj}$):

$D_{ij}^2 = \sum_{v=1}^{n} (x_{vi} - x_{vj})^2$

ATTACHMENT TO FOOTNOTE 5

ROSALIND Glossary: Euclidean distance
(http://rosalind.info/glossary/euclidean-distance/, printed 12/19/2019)

The Euclidean distance between two points in either the plane or 3-dimensional space measures the length of a segment connecting the two points. It is the most obvious way of representing distance between two points. The Pythagorean Theorem can be used to calculate the distance between two points, as shown in the figure below.

[Figure: right triangle illustrating the distance between points $(x_1, y_1)$ and $(x_2, y_2)$, with legs $x_2 - x_1$ and $y_2 - y_1$.]

If the points $(x_1, y_1)$ and $(x_2, y_2)$ are in 2-dimensional space, then the Euclidean distance between them is $\sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$. For points $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$ in 3-dimensional space, the Euclidean distance between them is $\sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2}$. For example, the Euclidean distance between (-1, 2, 3) and (4, 0, -3) is $\sqrt{25 + 4 + 36} = \sqrt{65}$.
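Following the attached page's generalization, a short numpy check of the n-dimensional formula; the five variable values below are made up purely for illustration.

```python
import numpy as np

# Two "cells" described by n = 5 variables each (illustrative values).
cell_i = np.array([1.11603, 0.42, 3.0, 0.7, 2.2])
cell_j = np.array([0.97034, 0.40, 2.5, 0.9, 2.0])

# D_ij = sqrt( sum over v = 1..n of (x_vi - x_vj)^2 )
d_manual = np.sqrt(np.sum((cell_i - cell_j) ** 2))
d_library = np.linalg.norm(cell_i - cell_j)  # same computation via the norm
assert np.isclose(d_manual, d_library)
print(d_manual)
```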