In digital video, raw video frames are usually represented in either the RGB colour space or the YUV colour space.
In the RGB colour space the three colour values, red, green and blue, are stored separately. They can be stored either in the order red, green, blue (RGB) or in the order blue, green, red (BGR). Usually each colour component is stored using 8 bits, giving values in the range 0-255 per component. This layout is usually referred to as RGB24.
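As a minimal sketch of the packed layout described above, the function below reads one pixel out of an RGB24 buffer. It assumes a row-major layout with no row padding (stride equals width * 3); real frame buffers often pad each row to an alignment boundary, in which case the stride must be passed in separately.

```python
def rgb24_pixel(buf, width, x, y):
    """Return the (r, g, b) triple of the pixel at (x, y) in a packed
    RGB24 buffer with no row padding."""
    i = (y * width + x) * 3  # 3 bytes per pixel: R, G, B
    return buf[i], buf[i + 1], buf[i + 2]

# 2x2 test frame: row 0 is red, green; row 1 is blue, white.
frame = bytes([255, 0, 0,   0, 255, 0,
               0, 0, 255,   255, 255, 255])
rgb24_pixel(frame, 2, 1, 0)  # the green pixel
```

For a BGR buffer the same index arithmetic applies; only the order of the three returned bytes changes.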
Historically, the term YUV referred to analog encoding; nowadays it is frequently used for digital encoding as well. The YUV model defines a colour space in terms of one luminance component (Y) and two chrominance components (U and V). It encodes a colour image or video frame taking human perception into account: luminance carries brightness, and chrominance carries colour information. If the chrominance components are removed from a video, a black-and-white video remains. The most important component in YUV capture is therefore always the luminance (Y) component. The human eye barely notices any difference when the chrominance samples are reduced to half the number of luminance samples, so the most common YUV format is YUV 4:2:0. The 4:2:0 notation means that the U and V components are sub-sampled by a factor of 2 in both the vertical and the horizontal direction.
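The 2x subsampling in both directions can be sketched as averaging each 2x2 block of a full-resolution chroma plane; this is one common way to produce the half-size U and V planes (some encoders use other filters or simply drop samples). The function name and list-based plane representation here are illustrative, not from any particular library.

```python
def subsample_420(plane, width, height):
    """Average each 2x2 block of a full-resolution chroma plane,
    halving it both horizontally and vertically.
    width and height must be even."""
    out = []
    for y in range(0, height, 2):
        for x in range(0, width, 2):
            s = (plane[y * width + x] + plane[y * width + x + 1] +
                 plane[(y + 1) * width + x] + plane[(y + 1) * width + x + 1])
            out.append(s // 4)  # integer average of the 2x2 block
    return out
```

Since each chroma plane shrinks to (w/2) x (h/2), a 4:2:0 frame needs w*h + 2*(w/2)*(h/2) = 1.5 bytes per pixel, versus 3 bytes per pixel for RGB24.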
A vectorscope displays the chrominance, or colour, information of a video frame on a circular plot. It is primarily used to align multiple cameras so that the colours of their videos match when shooting with any of them.
From an implementation perspective, a vectorscope plots the U and V components of a video frame on the x and y axes respectively. The U and V components are calculated from the RGB values of each pixel in the frame and then normalized. The normalized values are scaled to the width and height of the vectorscope view and plotted on its x and y axes.
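The per-pixel mapping above can be sketched as follows. This uses one common set of full-range BT.601 RGB-to-Cb/Cr equations; the exact coefficients, range, and normalization depend on the video standard in use, and a real vectorscope would accumulate these points into a 2D histogram rather than plot single pixels.

```python
def vectorscope_point(r, g, b, view_w, view_h):
    """Map one RGB pixel (components 0-255) to (x, y) coordinates in a
    vectorscope view of size view_w x view_h."""
    # Full-range BT.601 chroma: Cb is the U axis, Cr is the V axis.
    cb = 128 + (-0.168736 * r - 0.331264 * g + 0.5 * b)
    cr = 128 + (0.5 * r - 0.418688 * g - 0.081312 * b)
    # Clamp to the nominal 0-255 range, then scale to the view size.
    cb = min(255.0, max(0.0, cb))
    cr = min(255.0, max(0.0, cr))
    x = round(cb / 255 * (view_w - 1))
    y = round(cr / 255 * (view_h - 1))
    return x, y
```

A neutral grey pixel has zero chroma (Cb = Cr = 128), so it lands at the centre of the view; saturated colours land progressively further from the centre, which is what gives the vectorscope its characteristic circular layout.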