Tag: webdev

  • Understanding React Context

    TL;DR

    // models/IAuthDatum.ts
    export interface IAuthDatum {
      token?: string;
    }
    
    // components/AuthProvider/index.tsx
    import React, { createContext, useState, Dispatch, SetStateAction } from 'react';
    import { IAuthDatum } from '../../models/IAuthDatum';
    
    const defaultAuthDatum: IAuthDatum = { token: undefined };
    
    export const AuthContext = createContext<{
      authState: IAuthDatum;
      setAuthState: Dispatch<SetStateAction<IAuthDatum>>;
    }>({
      authState: defaultAuthDatum,
      setAuthState: () => {},
    });
    
    export const AuthProvider: React.FC<{ children?: React.ReactNode }> = ({ children }) => {
      const [authState, setAuthState] = useState(defaultAuthDatum);
      return <AuthContext.Provider value={{ authState, setAuthState }}>{children}</AuthContext.Provider>;
    };
    
    // index.tsx
    ReactDOM.render(
      <AuthProvider>
        <AppEntry />
      </AuthProvider>,
      document.getElementById('root')
    );
    
    // pages/home/index.tsx
    export const Home: NextPage = () => {
      const { authState, setAuthState } = useContext(AuthContext);
      console.log(authState);
      return (
          <div onClick={() => setAuthState({ token: "token set!" })}>
            {"Token value: "+authState.token}
          </div>
      );
    };

    Intro

    I tried context once and didn’t especially like it. Part of the supposed appeal is that it is built into React and therefore ostensibly easier to set up than redux. However, I found the setup to involve lots of parts, such that it felt negligibly less complex than redux.

    Anyhow, I’ve decided to give it another go only this time I will try to actually understand what is going on — and spell it out here — rather than just copy/paste boilerplate code.

    Focus on Hooks

    The official docs introduce you to context but only, it seems, with older “class-centric” (or “no-hooks”) react patterns. I have no interest in class-centric react at this point, so I had to track down a separate tutorial that focuses entirely on hooks. The first one I found from Google by Dave Ceddia was great! The rest of this article is very much me rehashing what Dave wrote there for my own long-term memory benefits; if you’re here to learn about context, you might well want to go there.

    I quickly realized that the issue I’d had with context, like so many things in life, was that I started with something complex, whereas you need to start with something simple to really get what is going on.

    So What’s Going On?

    In the original class-centric way of doing things, and starting off super simple, you create and use a react Context like so:

    import React from "react";
    import ReactDOM from "react-dom";
    
    // Create a Context
    const NumberContext = React.createContext(42);
    // It returns an object with 2 values:
    // { Provider, Consumer }
    
    function App() {
      // Use the Provider to make a value available to all
      // children and descendants
      return (
        <NumberContext.Provider value={42}>
          <div>
            <Display />
          </div>
        </NumberContext.Provider>
      );
    }
    
    function Display() {
      // Use the Consumer to grab the value from context
      // Notice this component didn't get any props!
      return (
        <NumberContext.Consumer>
          {value => <div>The answer is {value}.</div>}
        </NumberContext.Consumer>
      );
    }
    
    ReactDOM.render(<App />, document.querySelector("#root"));

    The key thing here is that the Context gives you two components: the Provider and the Consumer. In its simplest usage, you feed a value to the Provider, and then that value is made available to your Consumer as illustrated in the above code. The Consumer will trigger a re-render of its children whenever the value of the context changes. (How to sensibly update the value of the Context is discussed below.)

    It’s also important to understand the difference between the two places where the value of the Context is being set: the argument passed to React.createContext(), and the prop labelled “value” passed to the Provider. According to the official documentation:

    The defaultValue argument [passed to React.createContext] is only used when a component does not have a matching Provider above it in the tree. This default value can be helpful for testing components in isolation without wrapping them.

    – ReactJs.org

    In other words, you can use the Consumer of a context without its Provider, but my understanding is that this will only let you access the original “default” value. If you want to be able to update the value of the Context then you need to use the Provider.

    To summarize so far:

    • Think of the thing that gets created by React.createContext(value) as being the external “store” of data that you export to your app in order to equip any given component with either a Provider or a Consumer of that value.
    • The Consumer will trigger a re-render of its children whenever the value of its context changes.
    • In practice, you always need to use the Provider of the Context in order to update the value of the Context; this makes the default value passed to the React.createContext() function essentially redundant/obsolete, and you will therefore often see this default value left out or given a bogus/placeholder data structure.

    useContext

    The above “class-centric” pattern is ugly. The Consumer takes a function as its only child, and the variable “value” has to be understood as defined within that function. Thankfully, the useContext hook means we don’t need this pattern at all.

    // import useContext (or we could write React.useContext)
    import React, { useContext } from 'react';
    
    // ...
    
    function Display() {
      const value = useContext(NumberContext);
      return <div>The answer is {value}.</div>;
    }

    This is much nicer: now we don’t need to wrap components with the Consumer component, and the variable value is declared explicitly and we can therefore call it whatever we like.

    Updating Context Within Nested Components

    As we just saw, one sets/updates the value of the Context via the prop named “value” passed to the Provider component. This fact is key to understanding how we can update the value of the Context from a component nested within a Consumer of that Context (viz. a React.FC using the useContext hook).

    The official documentation gives an example of how to achieve this by storing the latest value of the Context within the state of the top-level component that renders the Provider, as well as a callback function within that top-level component that will update that state. The state and callback are then passed within a single object to the prop labelled “value” of the Provider (thus setting the value of the Context).

    The nested component then extracts the object with the state and callback from the Context using the useContext hook. The callback can be triggered from the nested component, causing the state of the top-level component to update, causing the Provider to re-render, causing the value of the Context to change, causing the nested component to re-render.

    This is all well and good, except that it would be much nicer to abstract the management of state out of the top-level component and into one or more files that not only define the Context, but also the manner in which its value can be updated.

    We can achieve this by extracting the Provider into a component dedicated to this very purpose, so that our top-level component appears to wrap the rest of the app more neatly.

    const defaultState = 0;
    
    const MyContext = React.createContext({
      state: defaultState,
      setState: () => {}
    });
    
    const { Provider } = MyContext;
    
    const MyContextProvider = ({ children }) => {
      const [state, setState] = useState(0);
      return (
        <Provider value={{state, setState}}>
          {children}
        </Provider>
      );
    };
    
    const MyContextConsumer = () => {
      const {state, setState} = useContext(MyContext);
      return (
        <>
          <h1> {"Count: " + state} </h1>
          <button onClick={()=>setState(prev => prev+1)}>
              Click to Increase
          </button>
        </>
      );
    };
    
    const App = () => {
      return (
        <MyContextProvider>
          <MyContextConsumer />
        </MyContextProvider>
      );
    }

    An important note to stress about this code is that you have in effect two “stores” of information. The information is first stored in the state of a component, and then it is fed to the Context via its Provider. The Consumer component will then get the state combined with a callback as a single object (the ‘value’) from the Context, and use that value as a dependency in its (re-)rendering. Once you understand this fact — that for Context to really be effective you need to couple it with useState (or its alternatives like useReducer) — you will understand why it is often said that Context is not a state-management system, rather, it is a mechanism to inject data into your component tree.

    In summary, in practice, you need to keep conceptual track of the “state” as stored in a near-top-level component that wraps the Provider versus the “state” passed to/from the Context, and onto the Consumer.

    That’s it — if you can follow these concepts as illustrated in the above code, then you have the essential concepts of React Context. Hurray!

    The remainder of this article discusses further important patterns that build off of this knowledge.

    Context with useReducer

    Since Context is often seen as a replacement for redux, one will likely encounter useReducer instead of useState. Like useState, useReducer returns a state and a function to update the state.

    const [state, setState] = useReducer(reducer, initState);

    Unlike useState, the useReducer function takes two arguments. The second argument is the initial state that you wish to keep track of. The first argument is a reducer function that maps a previous state and an action to a new state. The action, as with redux, is an object of the form:

    {
      type: "ACTION_NAME", // Required string or enum entry
      payload: ... // Optional data structure
    }
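    In TypeScript, this shape is naturally modelled as a discriminated union, so the compiler checks the payload for each action type. A minimal sketch (the action names and payloads below are made up for illustration):

    ```typescript
    // Hypothetical actions (illustrative names, not from any real app).
    type Action =
      | { type: "SET_ITEMS"; payload: string[] } // payload required here
      | { type: "CLEAR" };                       // no payload for this one

    // Inside each case, the compiler narrows `action` so that
    // `action.payload` is only accessible where it exists.
    function describe(action: Action): string {
      switch (action.type) {
        case "SET_ITEMS":
          return "setting " + action.payload.length + " items";
        case "CLEAR":
          return "clearing";
      }
    }
    ```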

    By common convention, a reducer function is almost always a switch that returns a new state for different values of action.type. E.g.:

    export const myReducer = (prevState, action) => {
      switch (action.type) {
        case "SET_STH":
          return {
            ...prevState,
            sth: [...action.payload]
          };
    
        case "ADD_STH_ELSE":
          return {
            ...prevState,
            sthElse: prevState.sthElse + action.payload
          };
    
        default:
          throw new Error('Unknown action: ' + JSON.stringify(action));
      }
    };

    Notice that, as with redux, we need to always return a new object in our reducer function when we update the state in order for useReducer to trigger re-renders.
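    That reference-equality requirement can be checked directly in plain TypeScript, outside React entirely. A sketch using a reducer shaped like the one above (the state fields are illustrative):

    ```typescript
    // Illustrative state shape for this reducer sketch.
    interface MyState {
      sth: string[];
      sthElse: number;
    }

    type MyAction =
      | { type: "SET_STH"; payload: string[] }
      | { type: "ADD_STH_ELSE"; payload: number };

    const myReducer = (prevState: MyState, action: MyAction): MyState => {
      switch (action.type) {
        case "SET_STH":
          // Spreading builds a brand-new object and a brand-new array.
          return { ...prevState, sth: [...action.payload] };
        case "ADD_STH_ELSE":
          return { ...prevState, sthElse: prevState.sthElse + action.payload };
      }
    };

    const prev: MyState = { sth: [], sthElse: 1 };
    const next = myReducer(prev, { type: "ADD_STH_ELSE", payload: 2 });
    // next is a fresh object, so next !== prev; that changed identity
    // is exactly the signal useReducer relies on to trigger re-renders.
    ```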

    The items returned by useReducer acting on your reducer function and initial state are then placed in an object that is used to set the value of the Context Provider. A wrapped-provider component can thereby take the following form:

    const { Provider } = MyContext;
    
    export const MyContextProvider = ({ children }) => {
      const [state, setState] = useReducer(myReducer, initState);
      return (
        <Provider value={{state, setState}}>
          {children}
        </Provider>
      );
    };

    By convention, the function returned by useReducer (setState above) is often called ‘dispatch’.

    Context and Memoization

    Another important concept in redux is that of the ‘selector’. Suppose your app needs to track state of the form state: {A:IA, B:IB, C:IC}. Suppose that state gets updated frequently, that you have a component that only depends on state.C, and that you do not want it to re-render when only state.A and/or state.B get updated. As described in this answer, there are three ways that you can improve performance in such a case:

    1. Split your Context so that e.g. state.C is its own separate state
    2. Split the component that depends on C into two components: the first uses useContext to get the new value of C and then passes that as a prop to a separate component wrapped in React.memo
    3. Take the JSX to be returned by the component and wrap it in a function that is itself wrapped in useMemo with C in the array of dependencies.

    You might also consider creating two different Contexts for a single reducer: one to pass through the state, the other to pass through the dispatch function. (This way, any time the state is updated, components that only use the dispatch function — but not the state — will not re-render since the value of their Context never changes.)

    Another pattern I have encountered is to wrap the useReducer with a separate hook that executes more nuanced logic, such as also reading/writing to localStorage, and then using this hook within the Provider component.
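    I haven’t settled on a favourite version of that hook yet, but the persistence half of the idea can be sketched framework-free by wrapping the reducer itself. The storage interface is injected here (an assumption for testability; in the real hook you would pass window.localStorage):

    ```typescript
    // Sketch: wrap any reducer so that every new state is persisted.
    // StringStore is a subset of the window.localStorage interface.
    interface StringStore {
      getItem(key: string): string | null;
      setItem(key: string, value: string): void;
    }

    function withPersistence<S, A>(
      reducer: (s: S, a: A) => S,
      key: string,
      store: StringStore,
    ): (s: S, a: A) => S {
      return (state, action) => {
        const next = reducer(state, action);
        store.setItem(key, JSON.stringify(next)); // side effect: save each update
        return next;
      };
    }
    ```

    The wrapped reducer is a drop-in replacement for the original in a useReducer call, so the Provider component stays unchanged.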

    In short, think hard about what state each Context is responsible for, and consider splitting or memoizing to avoid expensive re-renderings.

    Context and Side Effects

    No conversation on state management is complete without considering API calls and ‘side effects’. Often we might want to trigger a series of actions such as fetching data from an API, updating the state, then fetching data from another API, etc.

    One normally supplements redux with a library like redux-saga, redux-thunk or redux-observable. These allow you to set up the triggering of actions or side effects that can trigger yet more actions or side-effects in a potentially endless cascade of events. These “redux middleware” solutions are also nice in that they help keep your centralized state-management logic separate from your component-state logic.

    As far as I can tell, such separation of concerns is not readily accommodated with Context. Instead, you need to interweave the logic that controls such cascades of events within your components, ultimately using useEffect.

    For example, suppose that upon loading the site you want to check if a user is logged in and, if so, fetch some data and then, based on that data, decide whether to display a message with data fetched from another API.

    One way to do this is to create a component that will show a login form if the user is not logged in (based on a boolean from the Context value), or an image if the user is logged in. On clicking the submit button for the login form the component executes a fetch to the api and then updates the Context with the returned data. This triggers a rerender of the component, which uses a useEffect listening for changes to the login boolean that issues another fetch to an API, and uses that data to update the Context again. This final update to the Context triggers another rerendering of components that can easily control the display of a message.

    This interplay between components and Context is straightforward enough to understand, though one might be wary of having all of the cascading logic “scattered” around the components with useEffect calls.

    One could imagine trying to foist all of the side-effect logic within one or more custom hooks within the Provider component. I have not tried that yet in any serious way, so may revisit this article in the future after experimenting further.

    My feeling for now though is that trying to cram the equivalent of redux-observable-like logic within hooks within the provider component will result in something rather convoluted. For this reason, I think I understand why it is said that Context is best-suited for small to medium-size apps, where one can indeed manage such “scattering” of the logic to maintain the app’s centralized state within the Consumer components. If your app is very large and complicated, then redux-observable might well be the way to go.

  • AWS Production Static Site Setup

    Intro

    I’ve fumbled through the process of setting up a production static site on AWS a few times now. These are my notes for the next time to get through the process faster.

    Overview

    We want to be able to run a local script that wraps around the AWS CLI to upload assets to an AWS S3 Bucket (using credentials for a user with limited permissions). The S3 Bucket is to be set up for serving a static site and to serve as the origin of a Cloudfront instance, which is itself aliased to a Route 53 hosted-zone record, all glued together with an ACM certificate.

    Finally, we need a script to copy over the contents of a directory for the static site to S3 in such a way as to compress all image files. In summary:

    • S3 Bucket
    • Cloudfront Instance
    • Certificate Manager Instance
    • Route 53 Configuration
    • AWS User for CLI uploads

    S3 Bucket

    Setting up an S3 Bucket is quite straightforward to accomplish in the AWS GUI Console. When asked about “Block all public access”, just uncheck it, and don’t apply checks to any of the sub-options. (Everyone I’ve seen just seems to ignore these convoluted sub-options without explanation.)

    Under permissions you need to create a bucket policy that will allow anyone to access objects in the bucket. So copy the ARN for the bucket (e.g. “arn:aws:s3:::rnddotcom-my-site-s3-bucket”) and use the “Policy Generator” interface to generate some JSON text as depicted below.

    Note: under the “Actions” option you need to select just the “GetObject” option. Click “Add Statement” and “Generate Policy” to get the JSON. Copy/paste it into the bucket’s policy text field and save. The following JSON is confirmed to work.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AddPerm",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::rnddotcom-site-s3-bucket/*"
            }
        ]
    }

    Next, when you enable “Static website hosting”, you must specify the “Index document” since the S3 servers will not default to index.html.

    Upload Static Files (with gzip compression)

    When developing, I always want to be able to re-upload/re-deploy my software with a script. For that, I use a bash script that wraps around the AWS CLI, which you can install on a Mac with Homebrew.

    For an example of such a script, see my terraform-aws-modules repo. For this to work, you need to have AWS credentials for a user with access to this bucket.

    A good practice is to create a user with just enough permissions for the resources you need to access. So go to the AWS IAM console, and create a user with “Programmatic Access”.

    In the permissions step, click on “Attach existing policies directly” and select — in this example — the “AmazonS3FullAccess” policy and click on “Next: Tags”.

    Skip through Tags, create the user, and copy the “Access key ID” and “Secret access key” items to somewhere safe. If you are using the script I shared above, then you can add these items directly to your local .env file. By sourcing the .env file, you give these credentials priority over those stored in ~/.aws/credentials (which is handy if you manage multiple AWS accounts.)

    export AWS_ACCESS_KEY_ID="..."
    export AWS_SECRET_ACCESS_KEY="..."

    Now you can run the above bash script that wraps around the AWS CLI to upload the contents of a local directory. The script also includes logic to pick out image files and compress them before uploading.
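    To give a feel for the overall shape of such a script, here is a minimal sketch (not the actual script from the repo; the bucket name, source directory and file-type list are all placeholder assumptions):

    ```shell
    #!/usr/bin/env bash
    # Sketch of an upload wrapper around the AWS CLI (all names are placeholders).
    set -euo pipefail

    BUCKET="${BUCKET:-s3://my-bucket}"   # hypothetical bucket
    SRC="${SRC:-./public}"               # local directory to upload

    # Write a gzipped copy of a file next to the original and echo its path.
    gzip_copy() {
      gzip -c "$1" > "$1.gz"
      echo "$1.gz"
    }

    # Sync everything except images as-is, then upload gzipped image copies
    # with a matching Content-Encoding header. Only runs when RUN_UPLOAD=1,
    # so the functions above can be sourced and tested without touching AWS.
    do_upload() {
      aws s3 sync "$SRC" "$BUCKET" --exclude '*.png' --exclude '*.jpg' --exclude '*.svg'
      find "$SRC" \( -name '*.png' -o -name '*.jpg' -o -name '*.svg' \) |
        while read -r f; do
          aws s3 cp "$(gzip_copy "$f")" "$BUCKET" --content-encoding gzip
        done
    }

    [ "${RUN_UPLOAD:-0}" = "1" ] && do_upload || true
    ```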

    You now have a complete simple http static site, great for development, etc.

    Cloudfront I

    If you need a production site then you need to have SSL encryption (at minimum to look professional), CDN distribution, and a proper domain.

    So next go to Cloudfront in the AWS GUI Console and create a new “Distribution”. There are a lot of options here (CDN’s are complicated things after all), and you just have to go through each one and give it some thought. In most cases, you can just leave the defaults. A few notes are worth making:

    • “Grant Read Permissions on Bucket”: No, we already set these up
    • “Compress Objects Automatically”: Select yes; here is a list of the file types that CloudFront will compress automatically
    • “Alternate Domain Names (CNAMEs)”: Leave this blank and sort it out after creating the distribution
    • “Default Root Object”: Make sure to set this to index.html
    • “Viewer Protocol Policy”: Set this to “Redirect HTTP to HTTPS” (as is my custom)

    SSL Certification

    Now we need to point a host name at the CloudFront distribution. Surprisingly, it seems you NEED to have SSL, and to have it set up first, for this to happen. So go to ACM and click on “Request a Certificate”. Select “Request a public certificate” and continue.

    Add your host names and click continue. Assuming you have access to the DNS servers, select “DNS Validation” and click ‘next’. Skip over tags and click on “Confirm and Request”.

    The next step will be to prove to AWS ACM that you do indeed control the DNS for the selected hosts you wish to certify. To do this, the AWS console will provide details to create DNS records whose sole purpose will be for ACM to ping in order to validate said control.


    You can either go to your DNS server console and add CNAME records manually, or, if you’re using Route 53, just click on “Create record in Route 53”, and it will basically do it automatically for you. Soon thereafter, you can expect the ACM entry to turn from “Pending validation” to “Success”.

    Cloudfront II

    Now go back and edit your Cloudfront distribution. Add the hostname to the “Alternate Domain Names (CNAMEs)” field, choose “Custom SSL Certificate (example.com)”, select the certificate that you just requested, and save these changes.

    Route 53

    Finally, go to the hosted zone for your domain in Route 53, and click on “Create Record”. Leave the record type as “A” and toggle the “Alias” switch. This will transform the “Value” field to a drop down menu letting you select “Route traffic to”, in this case, “Alias to Cloudfront distribution”, and then a region, and then in the final drop down you can expect to be able to select the default url to the CloudFront instance (something like “docasfafsads.cloudfront.net”).

    Hit “Create records” and, in theory, you have a working production site.

    NextJs Routing

    If you are using nextJs to generate your static files then you will not be able to navigate straight to a page extension because, I have discovered, the nextJs router will not pass you on to the correct page when you fall back to index.html, as it would if you were using e.g. react router. There are two solutions to this problem, both expressed here.

    • Add trailing slash to all routes — simple but ugly solution IMO
    • (Preferred) Create a copy of each .html file without the extensions whenever you want to reupload your site; requires extra logic in your bash script
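    For the second option, the extra logic in the upload script can be as small as this sketch (the directory name is a placeholder; index.html is skipped because S3 already serves it as the index document):

    ```shell
    #!/usr/bin/env bash
    # Sketch: copy each exported page foo.html to a sibling file named foo,
    # so that S3 can serve the extensionless route /foo.
    set -euo pipefail

    flatten_html_routes() {
      local dir="$1"
      find "$dir" -name '*.html' ! -name 'index.html' | while read -r f; do
        cp "$f" "${f%.html}"   # e.g. out/about.html -> out/about
      done
    }
    ```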

    Trouble Shooting

    • The terraform-launched S3 bucket comes with the setting “List” in the ACL section of permissions tab; it’s not clear to me what difference this makes.
    • I was getting a lot of 504 errors at one point that had me befuddled. I noticed that they would go away if I first tried to access the site with http and then with https. I was saved by this post, and these notes that I was then prompted to find, that brought my attention to a setting that you cannot access in the AWS GUI Console called “Origin Protocol Policy”. Because I originally created the Cloudfront distribution with terraform, which can set this setting, and it set it to “match-viewer”, the Cloudfront servers were trying to communicate with S3 with the same protocol that I was using. So when I tried to view the site with https and got a cache miss on a file, Cloudfront would try to access the origin with https; but S3 doesn’t handle https, so it would fail. When I tried using http, Cloudfront would successfully get the file from S3, so that the next time I tried with https I would get a cache hit. Now, since I don’t like using http in general, and in fact switched to redirecting all http requests to https, I was stuck until I modified my terraform module to change the value of Origin Protocol Policy to http-only. I do not know what the default value of Origin Protocol Policy is when you create a Cloudfront distribution through the Console — this might be a reason to always start off with terraform.
  • SVG Coordinates Demystified

    Why are SVG coordinates so confusing?

    SVGs are incredibly powerful but, if your experience is anything like mine, you’ve found all the viewBox jazz to be extremely “fiddly” and, thus, hard to master.

    I thought originally that I’d pick it up eventually just by playing around with parameters. This proved frustrating because there are a lot of permutations among the following sets of variables:

    1. The SVG’s width and height attributes
    2. The SVG’s style attribute’s width and height properties
    3. The numbers in viewBox="W X Y Z"
    4. The values taken by the preserveAspectRatio attribute.

    There are lots of tutorials out there, but most don’t go to the lengths necessary to avoid ambiguity and confusion on SVG coordinates. They tend to employ jargon that requires you to first understand what all these parameters mean (‘user space’, ‘view port’, ‘view port space’, ‘visible area’, etc.), and/or make ambiguous assertions along the way like “the ‘view port’ is the visible area of the SVG”.​*​

    I’ve therefore taken it upon myself to provide an orderly account of SVG coordinates and, having researched the matter, I can hopefully help you gain clarity.

    Before Continuing

    Be warned: having worked my way through this, I’ve come to realize that you have to be prepared to give this subject some serious concentration for probably an hour or two; this stuff just can’t be picked up through trial and error. The good news though is that it does make good sense once you get your head around it, and it’s a nice feeling when you get there. SVGs are an incredible skill to have in your toolkit, so dig out the time to get your foundations in solid order.

    Let’s also begin by making sure our basic HTML terminology is used with precision. If you’re not clear on the following distinctions, then take a moment to check out these links:

    OK, let’s go!

    1. The “View Port”

    The basic way of setting the size of an HTML element as it appears in the browser​†​ is to set the width and height properties of an element’s style attribute (e.g. <div style="width: 100px;"> ... </div>).​‡​

    Alternatively, in the case of a select few HTML elements,​§​ you can achieve the same effect — determining the size of the element as it appears in the browser — by setting dedicated width and height attributes on the element instead (e.g. <svg width="100" height="100"> ... </svg>).​¶​

    Choosing between these two ways of determining how much actual screen space the SVG element occupies is something of a matter of taste.​#​ Just be aware that the values of the style’s width/height properties will trump those of the element’s width/height attributes.

    In the context of SVGs, the rectangle defined within the browser window by setting either the width/height attributes or the style’s width/height properties is referred to as the “view port”. For example, the following declaration:

    <svg width="100" height="100" style="background-color: green;">

    would result in an SVG view port coincident with the green square you’d see in the browser.

    2. View Port Coordinates

    As with any HTML element, the SVG element is associated with a coordinate system whose origin is at the top-left corner, and whose default unit is the pixel. These coordinates could be used to perform standard HTML undertakings, such as the absolute positioning of a child element relative to the upper-left corner of the SVG.

    In the context of SVGs, this same coordinate system is referred to as the ‘view-port coordinate system’ (a.k.a “view-port space”). As explained in more detail below, this coordinate system will serve as the default coordinate system for drawing shapes.

    Note that in the absence of width/height attributes or style properties on the SVG, most browsers will allot the SVG element sides of length 300/150 pixels respectively. Or, to put it another way, the default size of the view port is 300 x 150 pixels.​**​

    3. The “View Box”

    Let’s forget about the view port for a second and restart our conceptual journey from a fresh angle. Scalable Vector Graphics (SVGs), as the name implies, are all about encoding your image information using real numbers and geometric constructs (circles, rectangles, etc.). So we’ll need a coordinate system in which we can define these paths, circles, etc., and we’ll call this space the “user space” (a.k.a “user-coordinate space” and “user-coordinate system”).​††​

    Now that we’ve defined two spaces — the browser-realized “view-port space” and this putative “user space” — we begin to sense that the nature of the challenge, and the source of much confusion/ambiguity,​‡‡​ will be to map one to the other.

    In the absence of a viewBox property on the SVG tag, the user space and view-port space are identical, with the view port acting as the visible part of those coinciding spaces.

    This is nice and simple but, as you might well sense, we’ll want to be able to customize the mapping between the user space and the view-port space in order to more finely control transformations, such as stretches. This customized mapping between spaces is achieved by two attributes of the SVG: viewBox and the preserveAspectRatio.

    A viewBox attribute defines a rectangular region within user space called the “view box”. In the absence of a preserveAspectRatio attribute, that region will be scaled to fit within the bounds of the view port and then centered.

    It’s important to realize that anything drawn outside the view box can still be visible so long as it fits within the view port. Schematic depictions of these relations are shown in Figure 1, where the user-space coordinate system and the view-port coordinate system are drawn together with coinciding origins.

    Figure 1. User Space Schematics. Top Left: shapes are drawn in user space; the blue box represents a square view port. Top Right: in the absence of a viewBox property, the view-port coordinates and user-space coordinates are the same, so the view port “just is” the visible portion of the user space. Bottom Left: a red view box is defined within user space. Bottom Right: in the absence of a preserveAspectRatio property, the view box is scaled to fit the view port and then centered, thereby determining the visible part of user space.

    Note: in the absence of width/height attributes, the view port assumes dimensions equal to those of the viewBox (i.e. the SVG just displays the view box exactly). So if you were to declare <svg viewBox="0 0 10000 10000">...</svg> then the SVG element would be rendered in the browser as a 10k x 10k pixel element. For this reason, it is advised to always ensure that the view port is set explicitly.
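    As a concrete illustration (values picked arbitrarily): the explicit width/height below fix the view port at 200 x 200 pixels, while the view box makes every user-space unit worth 4 pixels:

    ```html
    <!-- 200x200 view port; the "0 0 50 50" view box is scaled up 4x to fill it,
         so this circle, drawn with r=20 in user-space units, renders 160px wide. -->
    <svg width="200" height="200" viewBox="0 0 50 50">
      <circle cx="25" cy="25" r="20" fill="teal" />
    </svg>
    ```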

    4. preserveAspectRatio Attribute

    If the view port and view box have the same aspect ratio, then there is nothing more to do. The contents of the view box will simply be scaled, if necessary, to fit the view port.

    If however the view port and view box rectangles do not have the same aspect ratio, then you can sense that there are going to be various ways that the view box could be stretched and/or positioned within the view port. This is the scenario we shall be considering from hereon, and where the preserveAspectRatio attribute comes into effect.

    When you want the contents of the view box to be stretched to fit the view port, then just set preserveAspectRatio="none". This is similar to setting the css property object-fit: fill; in order to stretch a background image within e.g. a div element.

    As the name implies, any other (valid) value given to this attribute will preserve the aspect ratio of the view-box contents. Such values take a string of the form “X Y” where X specifies the direction with which to shift the view box within the view port (if permitted after scaling), and where Y specifies the type of scaling.

    Let’s start with Y. This can assume one of two possible values: “meet” or “slice”. These two values correspond closely to setting the css properties object-fit: contain; and object-fit: cover; respectively. Meet will scale the view box to fit within the width or height of the view port, and slice will scale the view box to take up the entire view port whilst preserving aspect ratio.

    After scaling the view box to fit the view port in either the up/down or left/right directions, it will be the case that there is freedom to shift the view box in the other direction. This is where the X value comes in. It takes values of the form xMidYMid where “Mid” can be replaced with either “Min” or “Max”, depending on whether you want to shift “negatively” or “positively” along the free direction.
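    The arithmetic behind meet/slice and the Min/Mid/Max shifts can be written down directly. This is my own reconstruction of the rules as described above, not any browser API:

    ```typescript
    // Uniform scale factor applied to the view box (my reconstruction).
    function scaleFor(
      mode: "meet" | "slice",
      port: { w: number; h: number },
      box: { w: number; h: number },
    ): number {
      const sx = port.w / box.w;
      const sy = port.h / box.h;
      // meet: the whole view box fits inside the view port -> the smaller ratio.
      // slice: the view box covers the whole view port -> the larger ratio.
      return mode === "meet" ? Math.min(sx, sy) : Math.max(sx, sy);
    }

    // Shift along the free axis once scaling has fixed the other axis.
    function alignOffset(align: "Min" | "Mid" | "Max", portLen: number, scaledLen: number): number {
      const slack = portLen - scaledLen;
      return align === "Min" ? 0 : align === "Mid" ? slack / 2 : slack;
    }
    ```

    For a 100 x 50 view port and a 10 x 10 view box, “meet” scales by 5 (the box becomes 50 x 50) and Mid alignment then shifts it 25 pixels along the free horizontal axis; “slice” scales by 10 and the overflow is clipped.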

    Conclusion

    Once you’ve wrapped your head around these concepts, you’ll begin to appreciate how hard it would be to figure out these definitions/relations just by playing around with parameters.

    I hope this brought you clarity and lets you enjoy working with SVGs a lot more. If you’re still confused, then I suggest you do what I did: write an article where you research and then try your best to explain these matters to someone else. Docendo discimus.


    1. ​*​
      If that statement doesn’t seem ambiguous to you then either you already understand SVG coordinates or you’re a better person than I.
    2. ​†​
      I.e. to determine the actual number of pixels on the screen that get taken up by the element.
    3. ​‡​
      Alternatively, of course, one can set these styles using css selectors and declarations within a separate style sheet.
    4. ​§​
      These are the “image” elements svg, canvas and img
    5. ​¶​
      The one difference in using width over style.width is that the browser will be able to use the width property to allot space for the element before parsing the css. The motivations for this are somewhat historic in character.
    6. ​#​
      See here for one opinion on the matter: https://stackoverflow.com/a/2414940/8620332
    7. ​**​
      Technically, this default value is also only applied in the absence of the viewBox parameter as described later.
    8. ​††​
      I think it’s referred to as “user space” because the user has (via the viewBox and preserveAspectRatio properties) control over where in the infinite plane the shapes are drawn, stretched, etc.
    9. ​‡‡​
      For example you might come across ambiguous statements like “the view port defines the visible area” without specifying what space this “area” is in.