Automated Oil Paint Mixer and Dispenser
Northeastern University: I, along with a team of four other engineers, developed an oil paint dispensing machine that scans a color and, utilizing a color mixing API, extrudes the correct ratio of primary paints to create the desired color when mixed.

Abstract
Color mixing is a fundamental skill for oil painting, yet it is difficult to learn quickly. Without this ability, oil painting can become intimidating for new artists. Current solutions utilize liquid pigments, which may not mix well with an oil medium, or are only available at an industrial level with company-specific pigments. The Smart Palette is an oil paint dispensing system that uses a linear actuator to extrude pre-mixed oil paint from canisters set in a carousel plate rotated by a Geneva drive. The attached color sensing device can detect any color it scans in real time. This RGB color is translated to CMYK and finally into percentages of five common oil paint bases: cadmium red light, cadmium yellow, ultramarine blue, burnt umber, and titanium white. With these percentages, the device extrudes set amounts from each canister to create the custom color. The Smart Palette has a touch screen providing an easy user experience: the user simply triggers the scan from the screen or navigates to a preselected color in the library. The device then automatically extrudes from each paint canister to create the custom color.

Figure 1: Full Assembly of the Smart Palette
Need
Color mixing is a fundamental skill taught to oil painters, yet it can be extremely difficult to learn. Training the artist’s eye to recognize how much of a specific color to add can take years, making oil painting unapproachable for beginners and a difficult art to teach. Seasoned artists who know how to mix colors still spend time repeatedly recreating colors each time they return to an unfinished painting. There are no consumer products that focus on automatically dispensing oil paints in the correct ratios required to make a chosen color. Being able to see the exact amounts of paint required to mix a color helps students develop an eye for color mixing, making art teachers our main audience. Additionally, automated paint dispensing can reduce the time hobbyists and professional artists spend mixing paint. We intend to develop an automated color dispensing system that extrudes ratios of red, blue, yellow, white, and dark brown paint to accurately create a chosen color.
Background and Significant Prior Work
Before beginning our design process, it was fundamental that we understood how oil paint is made, the rheological properties of oil paint, and color systems. Furthermore, we reviewed existing solutions and patents that address combining paint with pigments, dispensing paint and pigments, and color matching.
​
Rheological Properties of Oil Paint
Oil paint is unique among painting mediums because it uses oil as a base rather than the water found in acrylics and watercolor paints. Additionally, oil paint behaves as a Bingham plastic – a non-Newtonian fluid that flows only above a yield stress, whose apparent viscosity decreases with shear (shear thinning), and whose shear stress increases linearly with shear rate, as seen in Figure 2. Although other mediums such as acrylic paint share similar properties, the oil binder gives oil paint a higher viscosity than water-based paints and a thicker, toothpaste-like consistency [2]. Figure 3 depicts the results of a study by Sanghyuk Lim and Kyung Hyun Ahn [3] comparing the change in viscosity of oil paint and water-based paint with shear rate. As shear rate increases, the viscosity of both paints decreases, making them easier to spread with a paintbrush; yet the viscosity of the oil paint remains higher, with a linear relationship, compared to the water-based paint.
Making Oil Paint
Oil paint is traditionally made by crushing dry pigment into an oil medium as displayed in Figure 4. The viscosity of the paint can be increased by adding more dry pigment or lowered by adding more oil. The main techniques for paint making were developed between the 13th and 17th centuries [2]. Historically, water, resins, and other materials were added in the paint-mixing process to create specific effects after the paint dried. In an industrial setting, pigment is mixed into an oil medium using a roller mill, which uses rotating shafts to continuously crush the pigment into the paint.
​
Color Systems
Colors are commonly based on an RGB color model or a CMYK color model, depending on how the color is viewed. RGB color models are the basis for light, which uses red, green, and blue as its primary colors. These can combine to make white, as seen in Figure 4. This property is known as additive color, since the combination of colors increases the amount of light being reflected, resulting in a
maximum of white. The RGB model can be seen on digital displays such as smartphones and TVs. CMYK, on the other hand, uses cyan, magenta, yellow, and black as its primary colors for pigments. When combined, these colors absorb more light, resulting in a maximum color of black as seen in Figure 4. This property is known as subtractive color, which increases the amount of light being absorbed as more color is added. CMYK is typically used in printing to create colors using inks, paints, or dyes. The difference in display between RGB and CMYK can result in colors on a computer looking brighter than they do in print. Companies such as Pantone have created a standardized system known as the Pantone Matching System (PMS) [5] that provides users with labeled color swatches so the color on a digital display can be compared to its printed counterpart.
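The standard RGB-to-CMYK relationship described above can be sketched in Python, the language used on the prototype’s RaspberryPi. This is the textbook conversion formula, not necessarily the exact mapping the Smart Palette applies:

```python
def rgb_to_cmyk(r, g, b):
    """Convert an 8-bit RGB color (0-255 per channel) to CMYK fractions (0-1)."""
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0  # pure black: all pigment comes from K
    # Normalize channels; K is the "missing" lightness of the brightest channel.
    rn, gn, bn = r / 255, g / 255, b / 255
    k = 1 - max(rn, gn, bn)
    c = (1 - rn - k) / (1 - k)
    m = (1 - gn - k) / (1 - k)
    y = (1 - bn - k) / (1 - k)
    return c, m, y, k
```

For example, pure red (255, 0, 0) maps to full magenta and yellow with no cyan or black, illustrating the subtractive relationship between the two models.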
​
When painting with pigments, hues – the names of the colors seen on the visible light spectrum – are commonly organized using the Itten color wheel, as shown in Figure 5. In this system, red, yellow, and blue are the primary colors found in the center, which can be mixed to create any other color except white [6]. These three primary colors are mixed to make green, orange, and violet – the secondary colors. The secondary colors are combined with primary colors to form six tertiary colors: yellow-orange, red-orange, red-violet, blue-violet, blue-green, and yellow-green. Colors directly across from each other are known as complementary colors, which create brown when mixed. When mixing paints, adding a small amount of blue to red shifts it toward magenta. If that magenta were then mixed with a color containing green, magenta’s complementary color, the mixture would gain brown and look muddier.
​
To look deeper into color perception, it was essential to consider the human-perceptible color spectrum, specifically CIE 1931 [7] as seen in Figure 5, for our color algorithm to operate properly. To produce a specific paint color from a set number of inputs, the algorithm relies on a series of differential equations derived from a two-stream approximation for light diffusing through a single standard layer of paint, commonly referred to as the Kubelka-Munk theory [11]. The theory, which considers the absorption and remission coefficients of the coating surfaces, uses the two diffuse light fluxes to calculate what the human eye perceives. This is the basis of our mixing model, which operates under the assumption that a layer of paint is applied to a 1” x 1” area of perfectly white canvas. As seen in Figure 6, light transmission is a function not only of the number of layers but also of the paint pigment. This introduces a new variable whenever more than one pigment is present, requiring further equations. These calculations produce a “remission function” that gives the backscattering of light from different paint pigments [12]. This approach makes the color source derivative: any set of paint pigments can be used as color inputs, and the required amount of each can be accurately determined. Calculating the proportions of paint needed to achieve a desired color is the exact purpose of our color algorithm, as discussed below. The algorithm hinges on the validity of these equations, which are commonplace in many forms of computer art. Invoking K-M paint mixing requires converting RGB to CMYK, performing the algorithm in this “latent space,” then converting back to RGB [13].
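The single-constant Kubelka-Munk relations can be sketched in a few lines of Python: the remission function maps reflectance R to K/S, a mixture’s K/S is taken as the concentration-weighted sum of its components’ K/S values, and the relation is inverted to recover the perceived reflectance. The report does not publish its exact equations, so this is the textbook form rather than the team’s implementation:

```python
import math

def remission(r):
    """Kubelka-Munk remission function F(R) = K/S = (1 - R)^2 / (2R),
    for reflectance 0 < R <= 1 of an opaque paint layer."""
    return (1 - r) ** 2 / (2 * r)

def reflectance(ks):
    """Invert the remission function: R = 1 + K/S - sqrt((K/S)^2 + 2*K/S)."""
    return 1 + ks - math.sqrt(ks ** 2 + 2 * ks)

def mix_reflectance(component_rs, fractions):
    """Predict the reflectance of a paint mixture: sum each component's K/S
    weighted by its concentration fraction, then invert back to reflectance."""
    ks_mix = sum(f * remission(r) for r, f in zip(component_rs, fractions))
    return reflectance(ks_mix)
```

In practice these relations are evaluated per wavelength band (or per color channel). As a sanity check, mixing equal parts of two identical paints returns the same reflectance.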
​
Current Solutions
Current solutions for desktop paint mixers such as the Picolor Paint Mixing Device [4] and MESOMIX [5] dispense liquid pigments that can be mixed into a medium and then used. Using a water or ethanol-based pigment runs the risk of thinning an oil medium, which cannot be thickened unless a powdered pigment is added. Therefore, the Smart Palette dispenses pre-mixed oil paints to maintain a consistent viscosity.





Design Solution
The goal of the Smart Palette was to design a device that could recreate any color with oil paint. Specifically, we wanted to detect a color, analyze it, and then dispense a specified amount of oil paint from each reservoir (red, blue, yellow, white, and brown) onto a palette to be mixed. To achieve this, the Smart Palette was split into two main components: mechanical and electronic.
Our solution was to create an oil paint dispensing system with a rotating carousel and a linear actuator that extruded pre-mixed paint onto the artist’s palette. The carousel was rotated using a custom Geneva drive and stepper motor. Following CMYK subtractive color theory and the mixing paints preferred by artists, we settled on five base colors for the paint reservoirs: cadmium red light, cadmium yellow, ultramarine blue, burnt umber, and titanium white.
​
A color sensor with built-in illumination and custom housing was designed to detect the desired color. Additionally, a library of pre-selected colors was available for the user to choose from as an alternative. The RGB color, whether from the sensor or manually selected, was processed through a software pipeline known as the color algorithm. This algorithm converts RGB to hex code, uses K-M theory to determine the closest color achievable with the given base paints, and then calculates the ratio of each paint to extrude. With these percentages, the mechanism could automatically dispense each color. To control the system, a graphical user interface, or GUI, was displayed on a small touchscreen. This is where the user could start the color scan or select a color to mix.
​
Mechanical Components
A main component of the system is a large linear actuator – a piston-like device that uses a turning
motor to extend and retract – used to dispense paint. The shaft of the actuator extends until it contacts
the paint tubes, and then extrudes the desired amount of paint. The tip of the actuator is fitted with a
touch sensor, which gives information as to when the actuator makes contact to control the volume of
paint coming out of the tube, increasing accuracy. The actuator is mounted above each paint canister
on the front of our prototype’s frame, as shown in Figure 7.
​
The carousel sub-system makes up the primary mechanism of the prototype. It is responsible for
holding the paint canisters and orienting them so that the correct color is dispensed. This sub-system
will be provided with an input from the color algorithm telling it which color to rotate underneath the
dispensing mechanism. This is accomplished with three main components: a large disc, a Geneva drive,
and a stepper motor; these, accompanied by friction reducing structural support components, allow
the carousel to rotate with ease while remaining controlled and structurally rigid. When extruding, the
paint tube must be directly under the dispensing mechanism, thus, the Geneva drive was chosen as the rotational mechanism. This key feature comes from the drive’s unique ability to maintain precise position of the reactionary cog, accomplished by translating the rotation of the driving cog into discretized partial rotations of the reactionary. Thus, the disc holding the paint tubes can only be rotated in increments of 1/5th of a rotation and is locked into each of these positions until the stepper motor is driven.
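The carousel indexing described above can be sketched in Python. The constants are illustrative assumptions – a common 200-step stepper motor and one full driver revolution per Geneva slot – since the prototype’s actual gearing is not given in the report:

```python
NUM_SLOTS = 5             # five paint canisters on the carousel
STEPS_PER_REV = 200       # assumed 1.8-degree stepper motor
DRIVER_REVS_PER_SLOT = 1  # assumed: one driver revolution advances one slot

def steps_to_slot(current_slot, target_slot):
    """Stepper steps needed to bring target_slot under the dispenser.
    The Geneva drive only permits whole 1/5-turn advances, so motion is
    expressed in whole slots, rotating forward only."""
    slots_forward = (target_slot - current_slot) % NUM_SLOTS
    return slots_forward * DRIVER_REVS_PER_SLOT * STEPS_PER_REV
```

For example, moving from slot 4 around to slot 0 is a single forward advance of one slot, or 200 steps under these assumptions.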
​
Electronic Components
The color sensor is a functional component of the system that allows the user to scan any color they
can see and use it to paint. This will help the user learn to replicate the color or allow them to quickly
produce a color they see in their environment that they want to use. Paired with a Pantone book of
thousands of visible colors, this is a powerful tool. Because a color represented digitally in RGB can look vastly different from the same color in physical paint, pairing the color sensor with a Pantone book for selection is essential. This way, the user can view the physical color they want rather than a distorted version of it on a screen. Without this tool, the user would have to select an RGB-rendered color on a display; even if the system then mixed that color perfectly, the user could be disappointed because the result would not match what the screen showed.
The final design for the color sensor was the EZO-RGB color sensor from Atlas Scientific, as seen
below in Figure 11. This sensor offers remarkable accuracy compared to other components on the
market. The sensor had extensive documentation and sample code, easy mounting threads,
embedded high power LEDs, and integrated calibration functionality. The housing for the sensor
utilized the NPT threads on the sensor’s casing. The housing is a drafted cylinder open on both sides, with an internal lip that positively engages the sensor at the desired standoff distance. The outer ring of the housing surrounds the sensor’s 30° field of view without interfering with it, while the 120° field of illumination from the embedded LEDs is fully contained within the housing, concentrating the light on the target color for the sensor to read. This also isolates the sensor readings from ambient light, which would otherwise add noise to the data. Standardizing the process in this way is essential.
With the color sensor, an electrically isolating adapter is required for the pins to be compatible with the RaspberryPi. Further, an interlinking component is used to allow the sensor to operate as a serial device, using the UART pins instead of the I2C pins, which are reserved for motor control. Lastly, two 4.7 kΩ pull-up resistors are used to output the expected values from the GPIO pins. An extension cable harness was added to give the user more mobility with the device, which always remains tethered to the system. These components are mounted to the acrylic encasement with proper heat sinking options available for use.
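Operating the sensor as a serial device means readings arrive over the UART as ASCII lines. A small parsing helper (a hypothetical name, with the comma-separated response format assumed from the sensor’s serial protocol) might look like this; on the RaspberryPi the raw line itself would come from something like `serial.Serial('/dev/serial0', 9600).readline()`:

```python
def parse_ezo_rgb(line):
    """Parse a raw serial line from the color sensor, e.g. b'97,45,120\r',
    into an (R, G, B) tuple of ints, validating the 8-bit range."""
    fields = line.decode("ascii").strip().split(",")
    r, g, b = (int(v) for v in fields[:3])
    for v in (r, g, b):
        if not 0 <= v <= 255:
            raise ValueError(f"channel value out of 8-bit range: {v}")
    return r, g, b
```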
The color sensor then works closely with the color algorithm to convert the readings from 8-bit RGB values to CMYK. These values are then mapped onto a custom CMYK scale based on the paints we actually have, rather than ideal colors that can be calculated directly. The theory and experimental calibration of this functionality are discussed in the design process section below.


Design Process
The design of the Smart Palette required an interdisciplinary combination of artistic and scientific knowledge. To delegate tasks, the team first outlined all integral research, systems, and processes. Once all tasks were ordered, the team decided to split into two focus groups: the mechanical and electrical teams. The mechanical team was tasked with creating the structure and mechanisms, while the electrical team was responsible for powering all mechanisms and programming them to work together.
Mechanical Design
The mechanical team decided to first focus on a prototype that would be able to execute all the key features of the system. This would allow for proof of concept while only using a fraction of the available resources.
Here, the group split again, delegating individual work on separate components. Design meetings were held weekly to allow collaboration and ensure that each system was on track. The end product was a rigid tower that allowed for the dispensing system to be mounted over a singular paint canister. This initial prototype was essential to the team’s ability to diversify tasks and ensure that the core systems of the final prototype would be functional. It was on this initial prototype that the dispensing sub-system was developed, and primary testing was performed. These tests enlightened the team on many design requirements that would be necessary on the final prototype.
The success of our final product is largely due to key issues that were discovered and fixed on the prototype, including actuator misalignment, paint tube rigidity, and programming errors. The mechanical team worked in tandem with the electrical team to create solutions for these issues, which made troubleshooting easy: the mechanical team could focus on issues in their design without needing to understand the electronics, while the electrical team focused only on connections and programming.
​
Electrical Design
The electrical team was smaller than the mechanical team, consisting of two people. The team first chose
a controller board, as this determines a baseline of functionality for all mechanical and electrical
systems. Research focused on two microcontroller boards, the Arduino and RaspberryPi. The
RaspberryPi was chosen because of its versatility in controlling the many motors, sensors, and
programs required for our approach. The RaspberryPi comes equipped with Python 3, a language well suited to coordinating the various elements in our system. An electrical
schematic is displayed below, which gives further insight into the initial approach.
One challenge, however, was that the chosen linear actuator did not have proper documentation. The
12V motors shown in the schematic were 2 wire, and the purchased device was 8 wire. This required
additional motor controllers and a more complicated system. This could have been avoided by
conducting further research on the vendor’s website for the device. All components after this were
ordered under greater scrutiny. Although initially this caused delays in development, the lack of
documentation prompted our group to dissect the device and understand its functionality in terms of
raw power input and motor response. This allowed for very precise control of the amount of paint
dispensed from each tube.
The initially selected method for measuring RGB color was the Adafruit APDS9960 sensor, a promising new-generation model with high functionality. The sensor was wired to an independent RaspberryPi and programmed with execution and logic code. The RGB values were determined from light wave intensity, so ambient lighting had to be controlled: an embedded LED was added within 3D-printed housing to standardize all readings. Preliminary testing commenced using a Pantone color book with known color values to compare against. Colors across the spectrum were selected for iterative testing in the hope that patterns might arise.
​
The most noticeable pattern was how entirely the readings depended on lighting
conditions. Any deviation in lighting to the color target demonstrated a large shift
in color value readings. If the sensor was moved during data collection, the light
refraction would shift, creating unpredictable data. In Figure 10, a single trial of a deep green can be seen; the shift in the graph corresponds to a movement of the sensor, illustrating the temperamental nature of the device. A large part of
the testing was determining a calibration method to hone the color sensor
readings towards the known values. This was done with a series of calibration
factors and interpolation to lower the margin of error. It was useful to see that
RGB values trended in the same direction, but it was frustrating that upon
averaging these values, they still did not yield a reliable output. The margin of
error for our colors under perfect conditions was 15.9% compared to known RGB
values. This was not a low enough value for the use cases of our product and the
team could not find other solutions to increase accuracy without sacrificing other
aspects of the design. The team concluded that such a sensitive sensor was not viable for commercial use, especially in a classroom setting. Keeping the sunk cost fallacy in mind, the team began the search for another color sensor.
​
After some further research, the EZO-RGB color sensor was selected. Its integrated features eliminated much of the difficulty the Adafruit sensor had posed. The sensor connection did
require more components to be compatible with the RaspberryPi, as it was geared towards being
used with Arduino, but after purchasing electrically isolating adapters, the sensor was functional. The
RaspberryPi also had to be set to serial mode using the UART pins to monitor user commands and
continuously poll data in the terminal. Thus, the team had to next create a library of functions that
could be called by the touch screen graphical user interface that the user would view. This would
reduce the complexity of having to interact with the RaspberryPi terminal and poll commands.
Housing for the EZO-RGB was 3D printed to thread directly onto the NPT threads on the sensor. The
data was experimentally determined to be most accurate when the sensor was 2.5cm from the target
color, with the embedded LEDs set to 100% power on the 5V pin. Therefore, the housing holds the sensor 2.5 cm from its external face, and an extension cable harness gives users more range. The housing also keeps the sensor’s field of view clear, preventing obstructions from obscuring the color readings. The intention was that the user could scan any color they desire to paint by pressing the housing up against it. This creates a closed system in which no light except the embedded LEDs is used to interpret the target color. A lasting calibration, where each reading is standardized and thus directly comparable to the last, is paramount for repeatability in the team’s design. Under these settings, the variation in color data was 0.11%; the readings were remarkably consistent. Given both the sensor upgrade and the standardized conditions provided by the housing, this margin of uncertainty is satisfactory.
The next sensor design challenge was to convert the obtained RGB values to an
adapted CMYK scale. While RGB is how color is represented digitally, CMYK is
how it is viewed and printed physically. To output CMYK, the team would need
perfectly pure representations of cyan, magenta, yellow, and black paint.
However, ideal pigments do not exist; each available color contains a considerable amount of the others, as the team elected to purchase a light red, a deep blue, a yellow, and a deep brown, as advised by paint artists. So instead, the color algorithm attempts to use these limited inputs to create an output as close to the desired color as possible. Figure 12 shows a color comparison using the UI within the algorithm. On the right is the desired color, entered by the user or read by the color scanner. On the left is the closest achievable color given the limited paints on hand. So, to approximate the color “Cadillac,” our algorithm takes the paints we have and makes “Matrix,” which is a 99.1% match to the desired color. This match factor is based on the average percent error of each R, G, and B value independently. Further, it can be empirically observed that the concocted color was very close to the desired one. In this example, the color is created using 53.6% Cadmium Red Light, 21.4% Ultramarine Blue, and 25% Titanium White.
These values are relative to the total volume of paint desired, which is a user input. That is, if the user wanted 20mL of a color, the dispenser would output 10.72mL (53.6%) of Cadmium Red Light paint. Dispensing the remaining values in the same way yields exactly the color predicted by the algorithm to be a 99.1% match to the desired color. Further, after randomized experimental testing, the color algorithm averaged 96.8% accuracy for colors across the visible light spectrum. Given only five input paints, this is a satisfactory result. Details of the color algorithm can be found in Appendix B below.
After solving for the desired volume from each paint canister, the displacement of the actuator, and thus of the canister’s plunger, can be related directly to the output volume of paint: the fixed dimensions of the cylinder make the fill height the only variable. This allows the RGB value to be linked directly to the actuator displacement after a series of calculations, bridging the user input to the mechanical output.
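Putting these two steps together: the per-paint ratio scales the requested total volume, and since 1 mL = 1 cm³, the plunger travel for a rigid cylindrical canister is h = V / (πr²). The sketch below reproduces the 20 mL, 53.6% example from the discussion above; the canister radius is a hypothetical value, not the actual part dimension:

```python
import math

def dispense_plan(ratios, total_ml, canister_radius_cm):
    """Map paint ratios to (volume in mL, plunger travel in cm) per canister.
    Assumes a rigid cylinder, so travel = volume / cross-sectional area."""
    area_cm2 = math.pi * canister_radius_cm ** 2
    return {paint: (frac * total_ml, frac * total_ml / area_cm2)
            for paint, frac in ratios.items()}
```

With ratios of 53.6% cadmium red light, 21.4% ultramarine blue, and 25% titanium white and a 20 mL request, the red canister dispenses 10.72 mL, matching the worked example.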
However, this theory does not mitigate all errors in the system. The paint values used in the system were gathered empirically with help from the color sensor, which introduces compounding error: the sensor determines both the input and the reference without comparison to a known value. Therefore, the team also trained the sensor to interpolate against known data. The team measured out precise quantities of each paint and applied them to 1” by 1” white cards, ensuring the exact contribution of each paint was known. This was repeated for the full range of colors across many cards. The sensor was then exposed to each card and its readings compared to the known quantities. The resulting bias showed the sensor is particularly sensitive to red colors and insensitive to blue colors, while green colors demonstrated no significant bias in either direction.
Armed with this information, the team added calibration functionality of their own to the sensor. Factors were applied to each RGB reading to balance out the bias, proportional to the magnitude of the reading. Further, this experimental data was logged to a library from which the artist can select exact colors when scanning is not desirable. These hardcoded proportions increased the accuracy of the sensor and its logic model, improving color matching ability. Lastly, the team also modified the gamma correction value native to the device, which shifted the bias of the readings within the CIE 1931 color space toward blue values and away from red values.
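The bias correction described here can be sketched as per-channel multiplicative factors, scaling red down and blue up. The factor values below are illustrative placeholders, not the team’s measured calibration constants:

```python
# Illustrative per-channel calibration factors: the sensor was found to
# over-read red and under-read blue, so red is attenuated and blue boosted.
CAL_FACTORS = {"r": 0.93, "g": 1.00, "b": 1.08}

def calibrate(r, g, b):
    """Apply bias correction proportional to each reading's magnitude,
    then clamp the result back to the 8-bit range."""
    scaled = (r * CAL_FACTORS["r"], g * CAL_FACTORS["g"], b * CAL_FACTORS["b"])
    return tuple(min(255, max(0, round(v))) for v in scaled)
```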
The detail applied to both the color sensor and the color algorithm is paramount to the success of the system. Color accuracy is the primary selling factor of this design, so the effort applied here directly improves the measurable results discussed below.




Results
The success of our final prototype can be defined as a fully functioning electromechanical system, color algorithm, graphical user interface, and rigid frame housing all the components. Our paint dispensing system operated as intended, extruding various amounts of paint based on user input. The dispensing mechanism initially output unreliable values; however, by both increasing the rigidity of the system and developing calibration curves for each paint, the error of the system was mitigated. Though the outputs are still not exact, the uncertainty in the system is not visible in the tangible paint mix. The carousel rotated into the correct position and withstood large loads from the actuator.
Further, the actuator and touch sensor assembly functioned correctly in parallel. The touch sensor reliably indicated when the actuator had contacted the plunger of the paint canister. From there, the actuator code would successfully execute, driving the actuator into the plunger of the paint tube to dispense the desired amount. The color sensor could successfully read any color without recalibration before use, thanks to the standardized housing. The color algorithm achieved a 96.8% average match rate for any possible color, which is within the satisfactory range given only five input colors. The GUI was user friendly and provided multiple ways to select colors, whether by color sensor or the color library. The physical LCD touch screen worked and communicated seamlessly with the rest of the system. The rigid aluminum frame did not deform under applied loads after sufficient testing and reinforcement. Finally, the acrylic siding added a necessary barrier protecting the sensitive electronic components. All these elements worked together harmoniously as a tuned system.
An additional assessment of the Smart Palette’s success was done by comparing the desired color to the actual color created. Comparing these colors provided valuable insight into how well the Smart Palette color matched. Figure 13 demonstrates the comparison between the target color and the new color created. This example was measured as a 95.7% match, which is still satisfactory for printing but possesses enough disparity to be empirically perceived as different. Beyond this match percentage, the difference becomes imperceptible to the human eye. Based on this result, and many others, the Smart Palette was determined to be successful at reading and producing the correct color. In summary, all electrical and mechanical elements worked together as intended, delivering successful color-match results. Though there was error propagation between systems, the output was deemed successful against the initial project performance benchmarks.

Summary and impact
Combining methods of color matching with an automated paint dispensing mechanism, the Smart Palette is a valuable tool for oil painters. Seasoned and amateur painters alike can use the device to recreate colors. Artists will be able to easily remix old paint swatches or choose a specific color from a library, bypassing the time and guesswork involved with creating paint colors manually. The Smart Palette will be a great addition to art classrooms, allowing students to create any color they desire with ease. It can serve as a powerful tool for students to help demonstrate the correct amounts of paint to mix as well.
While the initial version of the Smart Palette can detect and recreate colors, additional features could further enhance the user’s experience. Incorporating a mixing mechanism for the paint, transitioning to a wireless color sensor, and providing an extended color history library with a custom app would make the experience more hands-off and user-friendly. The Smart Palette’s color algorithm could also be transferred to applications with acrylic paint, or even extended to the cosmetics industry for custom-color nail lacquer.
Although the color algorithm can calculate quantities of base paint with high precision and accuracy, there are many real-life factors that impact the final color. One major source of the error comes from the assumption that the paint canisters are rigid bodies. In actuality, the plastic material deforms when high loads are applied. Analysis was also completed with the assumption that the oil paint itself was uniform in consistency. However, recent testing showed oil-pigment separation hidden within the containers. Further design analysis can be completed to help combat these major sources of error to increase the accuracy and precision of the Smart Palette.
This Capstone experience has taught us many lessons in preparation for our careers as engineers, whether in the aerospace industry or on movie sets. Tough obstacles such as mismatched parts taught us how to use the limited rework tools at our disposal for a quick turnaround. Additionally, collaborating on hundreds of lines of code taught us valuable organization and communication skills; frequent messages and clear pseudocode were necessary to minimize bugs and miscommunication. Using GitHub for version control also allowed us to track changes between versions of code effectively.
References
[1] “Hyperconcentrated Flows,” 2023. [Online]. Available: https://www.hec.usace.army.mil/confluence/rasdocs/rasmuddebris/non-newtonian-technical-reference-manual/classification-of-non-newtonian-flows/hyperconcentrated-flows
[2] L. de Viguerie, G. Ducouret, F. Lequeux, T. Moutard-Martin, and P. Walter, “Historical evolution of oil painting media: A rheological study,” vol. 10, no. 7, pp. 612–621, 2009.
[3] S. Lim and K. H. Ahn, “Rheological properties of oil paints and their flow instabilities in blade coating,” vol. 52, pp. 643–659, Jun. 2013. [Online]. Available: https://link.springer.com/article/10.1007/s00397-013-0717-3
[4] T. Rosi, M. Malgieri, P. Onorato, and S. Oss, “What are we looking at when we say magenta? Quantitative measurements of RGB and CMYK colours with a homemade spectrophotometer,” vol. 37, no. 6, 2016.
[5] “PANTONE® USA | Pantone Color Systems - For Graphic Design,” 2023. [Online]. Available: https://www.pantone.com/color-
[6] S. Pentak and D. A. Laurer, Chapter 13: Color. Boston, MA: Cengage Learning, pp. 256–273, 2016.
[7] CIE, “CIE 1931 colour-matching functions, 2 degree observer,” International Commission on Illumination (CIE), Vienna, Austria, 1931. [Online].
[8] D. Dahm, “Kubelka1,” 2021. [Online].
[9] A. K. Roy Choudhury, “4 - Instrumental colourant formulation,” Oxford: Woodhead Publishing, 2015, pp. 117–173. [Online]. Available: https://www.sciencedirect.com/science/article/pii/B9781782423676500046
[10] P. Kubelka, “New contributions to the optics of intensely light-scattering materials. Part I,” JOSA. [Online]. Available: https://doi.org/10.1364/JOSA.38.000448 (accessed Apr. 10, 2024).
[11] F. Technologies, “MESOMIX - Automated Paint Mixing Machine.” [Online]. Available: https://www.instructables.com/MESOMIX-Automated-Paint-Mixing-Machine/
[12] Sage Journals: Your gateway to world-class research journals. [Online]. Available: https://journals.sagepub.com/ (accessed Apr. 10, 2024).
[13] P. Kubelka, “Errata: New Contributions to the Optics of Intensely Light-Scattering Materials. Part I,” Optica Publishing Group. [Online]. Available: https://opg.optica.org/josa/fulltext.cfm?uri=josa-38-12-1067&id=49844 (accessed Apr. 10, 2024).
[14] “Picolor - Any color, anywhere, anytime!,” 05-Jan-2019. [Online]. Available: https://www.kickstarter.com/projects/picolor/picolor-any-color-anywhere-anytime
[15] “EZO-RGB™ embedded color sensor,” Atlas Scientific. [Online]. Available: https://atlas-scientific.com/probes/color-sensor/
Appendix: Engineering Analysis
The design of the Smart Palette is two-fold, as it is both a mechanical and an embedded-systems problem. Given this duality of electromechanical hardware and the intangible code behind it, the engineering analysis covers both our iterative coding approach and our mechanical analysis. While a general engineering analysis is typically thought of in terms of stress, strain, fatigue, and factors of safety, our team also had to consider the iterative design of our code and the strategic structuring of its architecture.
The success of the system hinges on color accuracy. However, as data passes from the color sensor through the color algorithm, actuator, and touchscreen to the mechanical dispensing system, the propagated error can become large. To mitigate this, the color algorithm was carefully tuned by systematically reducing the disparity between expected and produced colors. Similarly, the actuated paint dispensing system was correlated to experimentally derived calibration curves. Through this collective error-reduction effort, the output of the system became more reliable, accurate, and precise.
Mechanical Components
To achieve the correct proportions between the paints, precise amounts of paint must be dispensed. However, due to inconsistencies between paints, the dispensing mechanism is not inherently precise: the paints' differing rheological properties result in inconsistent compression ratios and flow rates. As a result, we decided the best course of action was to analyze the individual flow patterns of each paint and create calibration curves relating the mass of dispensed paint to the displacement of the actuator. With these curves, the required actuator displacement can be solved for. This calibration has only been conducted on our three primary colors, as the remaining two, white and brown, are used chiefly to adjust the brightness of a hue.
These curves were derived by commanding the linear actuator to move discrete amounts between 1 mm and 6 mm after contact with the plunger and manually recording the mass dispensed. Three consecutive tests were run for each paint, after which each test was plotted and a trendline fitted to the data. These trendlines use a second-order polynomial fit to capture the initial slow rate of flow and are displayed in Figures 14–16. The disparity between the three flow profiles is displayed in Figure 17: while the flows of blue and yellow are similar, there is a highly evident change in the flow of the red paint.
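Once a quadratic trendline m(d) is fitted for a paint, it can be inverted to find the displacement needed for a target mass. The sketch below shows one way to do this with the quadratic formula; the coefficients used are hypothetical stand-ins for the experimentally fitted values behind Figures 14–16.

```python
import math

def displacement_for_mass(target_mass_g, a, b, c):
    """Invert a quadratic calibration curve m(d) = a*d^2 + b*d + c to find
    the actuator displacement d (in mm) that dispenses the requested mass
    of paint (in grams)."""
    # Solve a*d^2 + b*d + (c - m) = 0 and keep the physical (positive) root.
    disc = b * b - 4 * a * (c - target_mass_g)
    if disc < 0:
        raise ValueError("target mass outside the calibrated range")
    return (-b + math.sqrt(disc)) / (2 * a)

# Hypothetical calibration for one paint: m(d) = 0.05*d^2 + 0.2*d
d_mm = displacement_for_mass(1.0, a=0.05, b=0.2, c=0.0)
```

In practice each paint would carry its own (a, b, c) triple from its calibration curve, and the returned displacement would be clamped to the tested 1–6 mm range.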



Figures 14, 15, 16 – Mass vs. Displacement for Cadmium Red, Ultramarine Blue, and Cadmium Yellow Light Paint

Figure 17 – Mass vs. Displacement Trendlines of Each Paint
Electronic Components
The color algorithm uses Kubelka-Munk (K-M) theory to predict the optimal way to reach a target color given a known set of inputs. The Smart Palette utilizes five paints, with hex values of E53123, FEDB03, 251D1A, 1D3A7E, and EEEBEB for Cadmium Red Light, Cadmium Yellow Light, Burnt Umber, Ultramarine Blue, and Titanium White respectively. Each hex value corresponds directly to an RGB triplet. These RGB values are then converted to the CMYK color space using the following code:
# Function to convert RGB to CMYK
def rgb_to_cmyk(r, g, b):
    # Normalize the RGB values by dividing by 255
    r_normalized = r / 255.0
    g_normalized = g / 255.0
    b_normalized = b / 255.0
    # Convert RGB to CMY
    c = 1 - r_normalized
    m = 1 - g_normalized
    y = 1 - b_normalized
    # Convert CMY to CMYK
    k = min(c, m, y)
    if k != 1:  # Prevent division by zero
        c = (c - k) / (1 - k)
        m = (m - k) / (1 - k)
        y = (y - k) / (1 - k)
    else:
        c = m = y = 0
    return c, m, y, k
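As a sanity check, the conversion can be exercised on the paint base hex values listed above. This self-contained sketch folds the hex-to-RGB step and the same CMYK math into one hypothetical helper (`hex_to_cmyk` is illustrative, not part of the team's codebase):

```python
def hex_to_cmyk(hex_color):
    """Parse a 6-digit hex color string and convert it to CMYK fractions
    using the same normalization as rgb_to_cmyk above."""
    r, g, b = (int(hex_color[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
    c, m, y = 1 - r, 1 - g, 1 - b
    k = min(c, m, y)
    if k != 1:  # prevent division by zero for pure black
        c, m, y = ((v - k) / (1 - k) for v in (c, m, y))
    else:
        c = m = y = 0
    return c, m, y, k

# Cadmium Red Light (E53123) comes out heavy in magenta and yellow
c, m, y, k = hex_to_cmyk("E53123")
```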
With the CMYK values, the colors can be analyzed in latent space: our K-M algorithm solves for the total remission from the coating surface, here a standard canvas. Each layer has an absorption coefficient k and a scattering coefficient s, which give the absorption-to-backscatter ratio k/s. The diffuse reflectance R∞ is then:

R∞ = 1 + (k/s) − √((k/s)² + 2(k/s))

where k is the absorption coefficient of the sample (a function of the wavelength λ of the color) and s is the scattering coefficient. The K-M transform of the measured spectroscopic observable is then proportional to the absorption coefficient. With these coefficients known, as they are native to the pigments used in each paint, the required relative concentration of each pigment can be solved for. It must be noted that this is only an approximation, given the nonhomogeneous nature of the fluids being used; however, this is acknowledged within the model. The remaining differential equations and transforms that lead to these coefficients are omitted for brevity.
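The relation between R∞ and k/s, together with its inverse (the K-M function), can be sketched directly; this is a minimal illustration of the standard Kubelka-Munk formulas, not the team's full spectral implementation:

```python
import math

def reflectance_from_ks(ks):
    """Diffuse reflectance R_inf of an opaque layer given the
    absorption-to-scattering ratio k/s (Kubelka-Munk)."""
    return 1 + ks - math.sqrt(ks * ks + 2 * ks)

def ks_from_reflectance(r_inf):
    """Inverse relation, the K-M function: k/s = (1 - R)^2 / (2 R)."""
    return (1 - r_inf) ** 2 / (2 * r_inf)

r = reflectance_from_ks(0.5)  # a moderately absorbing layer
```

The two functions are exact inverses of each other, which makes the pair a convenient round-trip check when debugging a pigment-mixing model.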
After creating the algorithm from these coefficients, the API and UI were used to test the color algorithm against itself. Random hex values were generated to produce random colors to match. The algorithm was given the hex values of the paints in our system and a resolution of 30 parts per total, meaning the smallest nonzero proportion of paint called for in the derived ratios is 1/30 of the total. This was done to avoid requesting a smaller amount of paint than our actuator could reliably output, as small amounts were unpredictable. The following table shows the target hex value, the optimal color produced with our paints, the percent match of the replication, and the proportions of each paint required:
Table – Target Hex Values, Algorithm-Matched Colors, Percent Match, and Required Paint Proportions
Here the proportions are ordered as red, yellow, brown, blue, and white. The data above gives an average 96.8% match. In preliminary testing, we found that adding one additional color, namely any green, brings our average match to 99.6%. We then split this data into the following figures to visualize how accurate our R, G, and B values were individually.
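One plausible way to snap the algorithm's continuous mixing fractions to the 30-part resolution described earlier is largest-remainder rounding. This is a hedged sketch, with `quantize_parts` and the sample fractions both illustrative assumptions rather than the team's actual implementation:

```python
def quantize_parts(fractions, total=30):
    """Round continuous mixing fractions (summing to 1) to integer parts
    summing to `total`, using the largest-remainder method."""
    raw = [f * total for f in fractions]
    parts = [int(r) for r in raw]
    leftover = total - sum(parts)
    # Hand the leftover parts to the entries with the largest fractional remainders.
    order = sorted(range(len(raw)), key=lambda i: raw[i] - parts[i], reverse=True)
    for i in order[:leftover]:
        parts[i] += 1
    return parts

# Hypothetical fractions ordered as red, yellow, brown, blue, white
parts = quantize_parts([0.42, 0.18, 0.05, 0.10, 0.25])
```

The method guarantees the parts always sum to exactly 30, so no paint volume is silently lost or gained by rounding.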





Figures 18, 19, 20 – Target Color vs. Color Algorithm Determined Color
In the above figures, the axes are the 8-bit values of the R, G, and B channels independently; for example, each axis on the red figure spans the full 0–255 range of the 8-bit data channel. There are two relevant values here: the R² value and the slope. The focus is on the slope, as it should be exactly 1 for a 100% average match. The target color is the independent x value and the algorithm-derived color is the dependent y value, so if each output exactly matched its target, y would equal x and the slope would be 1. Our ability to match red values is therefore very accurate, while blue is our least accurate channel: red has an error of 0.2%, green 2%, and blue 5.3%.
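The per-channel slope described above can be computed with an ordinary least-squares fit. A minimal pure-Python sketch follows; the target/produced sample values are hypothetical, not the project's measured data:

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical red channel: target 8-bit values vs. algorithm-produced values
targets = [10, 60, 120, 180, 240]
produced = [12, 59, 121, 178, 241]
slope = ols_slope(targets, produced)  # close to 1 when the match is good
```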
These gaps in output ability are restricted by the number of paints we have. As we only have five paints, it is to be expected that not every color can be created within the scope of our design; as stated before, adding a sixth paint drops these errors drastically. Given the limitations of the system, an average match of 96.8% is within our success goal for the design. Being able to produce any color to within a 96.8% match limits the error from the algorithm and is a satisfactory result from our color-matching approach.