MEASUREMENT SCIENCE REVIEW            Volume 17       



No. 1


  Measurement of Physical Quantities



Darko Brodić, Alessia Amelio: 

Range Detection of the Extremely Low-Frequency Magnetic Field Produced by Laptop’s AC Adapter


Human exposure to extremely low-frequency magnetic fields poses a risk to health. This paper considers the level of the extremely low-frequency magnetic field between 30 and 300 Hz emitted by laptop AC adapters. The experiment consists of testing 17 different laptop AC adapters. During the testing, the laptops are operated under normal conditions as well as under heavy load. The magnetic field is measured in the area around the AC adapter. The obtained data are evaluated against the critical magnetic field levels proposed by safety standards. Furthermore, the data are classified by a K-medians method in order to determine the critical levels of magnetic field exposure in the area near the AC adapter. The obtained classifications are evaluated according to safety standards, giving a critical analysis of the magnetic field areas at risk. Because a very strong magnetic field is emitted in certain areas, a recommendation for the safe use of AC adapters is proposed.
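The K-medians step described above can be sketched in a few lines; this is a minimal one-dimensional illustration on hypothetical field readings (in µT), not the authors' implementation:

```python
import random

def k_medians(values, k, iters=50, seed=0):
    """Cluster 1-D measurements into k groups: assign each value to its
    nearest center, then move each center to the median of its cluster
    (the median makes the method robust to outliers)."""
    rng = random.Random(seed)
    centers = sorted(rng.sample(values, k))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda j: abs(v - centers[j]))].append(v)
        centers = [sorted(c)[len(c) // 2] if c else centers[i]
                   for i, c in enumerate(clusters)]
    labels = [min(range(k), key=lambda j: abs(v - centers[j])) for v in values]
    return centers, labels

# hypothetical readings: a low-exposure zone and a high-exposure zone
readings = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]
centers, labels = k_medians(readings, k=2)
```

Separating the measurement grid into such clusters is what allows the areas around the adapter to be labelled as safe or at risk against a standard's threshold.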



Leonard Klaus:

Comparison of Two Experiments Based on a Physical and a Torsion Pendulum to Determine the Mass Moment of Inertia Including Measurement Uncertainties


To determine the mass-moment-of-inertia properties of devices under test with particularly small mass moments of inertia (some 10⁻⁴ kg·m²), two measurement set-ups based on different measurement principles were developed. One set-up is based on a physical pendulum; the second incorporates a torsion pendulum. Both measurement set-ups and their measurement principles are described in detail, including the chosen data acquisition and analysis. Measurement uncertainty estimations according to the Guide to the Expression of Uncertainty in Measurement (GUM) were carried out for both set-ups by applying Monte Carlo simulations. Both set-ups were compared using the same three devices under test. For each measurement result, the measurement uncertainties were estimated. The measurement results are compared in terms of consistency and the resulting measurement uncertainties. For the given devices under test, the torsion pendulum set-up gave results with smaller measurement uncertainties than the set-up incorporating a physical pendulum.
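The Monte Carlo uncertainty propagation used here can be sketched generically. The torsion-pendulum relation J = kT²/(4π²) and all numbers below are illustrative assumptions, not values from the paper:

```python
import math
import random
import statistics

def mc_uncertainty(model, inputs, n=20000, seed=1):
    """Propagate input uncertainties through `model` by Monte Carlo
    (GUM Supplement 1 style): draw each input from a normal distribution
    (mean, standard uncertainty), evaluate the model, and summarize the
    resulting output distribution."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        args = {name: rng.gauss(mu, u) for name, (mu, u) in inputs.items()}
        samples.append(model(**args))
    return statistics.mean(samples), statistics.stdev(samples)

# torsion pendulum: J = k * T^2 / (4 * pi^2), with assumed inputs
J_mean, J_u = mc_uncertainty(
    lambda k, T: k * T**2 / (4 * math.pi**2),
    {"k": (2.0e-3, 1.0e-5),   # torsion stiffness [N·m/rad], assumed
     "T": (1.40, 0.002)})     # oscillation period [s], assumed
```

With these assumed inputs, J comes out on the order of 10⁻⁴ kg·m², the range the paper targets; the reported uncertainty is then the standard deviation of the simulated output distribution.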



R. Kořínek, J. Mikulka, J. Hřib, J. Hudec, L. Havel, K. Bartušek:

Characterization of the Embryogenic Tissue of the Norway Spruce Including a Transition Layer between the Tissue and the Culture Medium by Magnetic Resonance Imaging


The paper describes the visualization of the cells (ESEs) and mucilage (ECMSN) in an embryogenic tissue via magnetic resonance imaging (MRI) relaxometry measurement combined with subsequent multi-parametric segmentation. The computed relaxometry maps T1 and T2 show a thin layer (transition layer) between the culture medium and the embryogenic tissue. The ESEs, mucilage, and transition layer differ in their relaxation times T1 and T2; thus, these times can be used to characterize the individual parts within the embryogenic tissue. The observed mean relaxation times T1 and T2 are 1469 ± 324 ms and 53 ± 10 ms for the ESEs, 1784 ± 124 ms and 74 ± 8 ms for the mucilage, and 929 ± 164 ms and 32 ± 4.7 ms for the transition layer. The multi-parametric segmentation exploiting the T1 and T2 relaxation times as a classifier shows the distribution of the ESEs and mucilage within the embryogenic tissue. The discussed T1 and T2 indicators can be utilized to characterize both the growth-related changes in an embryogenic tissue and the effect of biotic/abiotic stresses, thus potentially becoming a distinctive indicator of the state of any examined embryogenic tissue.
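Using the two relaxation times jointly as a classifier can be illustrated with a nearest-centroid rule over the mean values quoted above; the normalized distance and this simple decision rule are simplifying assumptions, not the paper's segmentation algorithm:

```python
# mean (T1, T2) in ms per tissue class, taken from the abstract above
CLASSES = {
    "ESE":        {"T1": 1469.0, "T2": 53.0},
    "mucilage":   {"T1": 1784.0, "T2": 74.0},
    "transition": {"T1": 929.0,  "T2": 32.0},
}

def classify_pixel(t1, t2, classes=CLASSES):
    """Assign a pixel to the class with the nearest (T1, T2) centroid,
    normalizing each axis by the centroid value so the long T1 scale
    does not dominate the short T2 scale."""
    def dist(c):
        return ((t1 - c["T1"]) / c["T1"]) ** 2 + ((t2 - c["T2"]) / c["T2"]) ** 2
    return min(classes, key=lambda name: dist(classes[name]))
```

Applying such a rule pixel by pixel over the T1 and T2 maps yields a segmentation of the tissue into its constituent parts.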



Z. Roubal, K. Bartušek, Z. Szabó, P. Drexler, J. Überhuberová:

Measuring Light Air Ions in a Speleotherapeutic Cave


The paper deals with a methodology proposed for measuring the concentration of air ions in the environment of speleotherapeutic caves, and with the implementation of the AK-UTEE-v2 ionmeter. Speleotherapy, in the context of its general definition, is a medical therapy that utilizes the climate of selected caves to treat patients with health problems such as asthma. These spaces are characterized by high air humidity, which places extreme demands on the design of the measuring device, the Gerdien tube (GT in the following) in particular, and on the amplifier electronics. The result is an automated measuring system using a GT with low-volume air flow, enabling long-term measurement of air ion concentration and determination of spectral ion characteristics. Interesting from the instrumentation viewpoint are the GT design, the active shielding, and the design of the electrometric amplifier. A specific method for the calculation of spectral ion characteristics and a mode of automatic calibration were proposed, and a procedure for automatic measurement in the absence of attendants was set up. The measuring system is designed for studying and long-term monitoring of the concentration of light negative ions as a function of climatic conditions and of the mobility of ions occurring in the cave.



M. Hanzelka, J. Dan, M. Šlepecky, V. Holcner, P. Dohnal, R. Kadlec:

An Experiment to Prove the Effect of Low-Level Magnetic Fields Resulting from Ionospheric Changes on Humans


The investigation presented in the paper was performed in the laboratories of the Department of Theoretical and Experimental Electrical Engineering, Faculty of Electrical Engineering and Communication, Brno University of Technology, between April 22 and June 26, 2014. We examined a homogeneous sample of male and female participants comprising a total of 49 persons aged 19 to 26. The measurement of psychophysiological parameters took 19 minutes, encompassing five stages: Basic (5 mins.), Color (2 mins.), Rest (5 mins.), Math (2 mins.), and Rest (5 mins.). All the measuring cycles were carried out using a BioGraph Infiniti device (Thought Technology, Ltd.). Generally, the impact of the environment upon living organisms constitutes a crucial problem examined by today’s science. In this context, the present article describes the results of an investigation focused on ionosphere parameter variation and its role in the basic function of the nervous system. The discussed research concentrates on the measurement and detection of changes in the region of very low electromagnetic field frequencies; the authors introduce and verify related theoretical and experimental procedures to define the effects that influence brain activity and the cardiovascular system.



Shen Ting-ao, Li Hua-nan, Zhang Qi-xin, Li Ming: 

A Novel Adaptive Frequency Estimation Algorithm Based on Interpolation FFT and Improved Adaptive Notch Filter


The convergence rate and the continuous tracking precision are two main problems of the existing adaptive notch filter (ANF) for frequency tracking. To solve these problems, the frequency is first detected by interpolation FFT, which overcomes the slow convergence of the ANF. Then, borrowing the idea of negative feedback, an evaluation factor is designed to monitor the ANF parameters and maintain high frequency-tracking accuracy continuously. On this principle, a novel adaptive frequency estimation algorithm based on interpolation FFT and an improved ANF is put forward. Its basic idea, specific measures, and implementation steps are described in detail. The proposed algorithm achieves fast estimation of the signal frequency, higher accuracy, and better universality. Simulation results verified the superiority and validity of the proposed algorithm when compared with the original algorithms.
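The interpolation-FFT stage can be sketched as follows: find the peak DFT bin, then refine it with a two-point magnitude interpolation between the peak and its larger neighbour. This is a generic interpolated-FFT estimator for a rectangular window; the paper's exact interpolation formula and the ANF feedback stage are not reproduced here:

```python
import cmath
import math

def interp_fft_freq(x, fs):
    """Estimate a sinusoid's frequency: coarse search over DFT magnitude
    bins, then fractional-bin correction d = |X[k+1]| / (|X[k]| + |X[k+1]|)
    (mirrored for the left neighbour), which is exact for a rectangular
    window when negative-frequency leakage is negligible."""
    N = len(x)
    # naive DFT of the positive-frequency half (O(N^2), fine for a sketch)
    mags = [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))
            for k in range(N // 2)]
    k = max(range(1, N // 2 - 1), key=lambda i: mags[i])
    if mags[k + 1] >= mags[k - 1]:
        d = mags[k + 1] / (mags[k] + mags[k + 1])
    else:
        d = -mags[k - 1] / (mags[k] + mags[k - 1])
    return (k + d) * fs / N
```

For a 123.4 Hz tone sampled at 1 kHz with only 64 samples (bin spacing 15.625 Hz), the estimate lands within a fraction of a bin of the true frequency, which is what gives the ANF a good starting point.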




No. 2  

  Measurement of Physical Quantities


Ondřej Zavila, Tomáš Blejchař:

Capacities and Limitations of Wind Tunnel Physical Experiments on Motion and Dispersion of Different Density Gas Pollutants


The article focuses on analyzing the possibilities of modeling the motion and dispersion of plumes of gas pollutants of different densities in low-speed wind tunnels, based on the application of physical similarity criteria, in this case the Froude number. The analysis of the physical nature of the modeled process via the Froude number focuses on the influence of air flow velocity, gas pollutant density, and model scale. This gives an idea of the limitations of this type of physical experiment in relation to the real phenomena being modeled. The resulting statements and logical links are exemplified by a CFD numerical simulation of a given task calculated in ANSYS Fluent software.



Idir Mellal, Mourad Laghrouche, Hung Tien Bui:

Field Programmable Gate Array (FPGA) Respiratory Monitoring System Using a Flow Microsensor and an Accelerometer


This paper describes a non-invasive system for respiratory monitoring using a Micro Electro Mechanical Systems (MEMS) flow sensor and an IMU (Inertial Measurement Unit) accelerometer. The designed system is intended to be wearable and used in a hospital or at home to assist people with respiratory disorders. To ensure the accuracy of our system, we propose a calibration method based on an ANN (Artificial Neural Network) to compensate for the temperature drift of the silicon flow sensor. The sigmoid activation functions used in the ANN model were computed with the CORDIC (COordinate Rotation DIgital Computer) algorithm. This algorithm was also used to estimate the tilt angle of the body position. The design was implemented on a reconfigurable FPGA platform.



Jiang Meng, Zhipeng Liu, Kun An, Meini Yuan:

Simulation and Optimization of Throttle Flowmeter with Inner-Outer Tube Element


To resolve the dilemma between a smaller pressure loss and a larger flow measurement signal in traditional throttle flowmeters, a throttle structure with an inner-outer tube was designed and analyzed. The mathematical relationship model deduced from hydrodynamics showed that three major parameters determine the designed throttle structure. Furthermore, optimal results were achieved by combining orthogonal test design and computational fluid dynamics, taking the ratio of the inner-outer tube differential pressure to the anterior-posterior differential pressure as the optimization goal. Finally, the simulation results with the best parameter levels showed that the anterior-posterior differential pressure not only remained the smallest among structures with the same inner-outer tube, but was also about one order of magnitude lower than the differential pressure of a V-cone flowmeter under similar installation conditions, with the flow velocity varying from 0.5 to 3.0 m/s. The designed inner-outer tube flowmeter can not only save manufacturing costs but also relax the sensitivity requirements on pressure sensors, which may lead to broader application in chemical and petrochemical enterprises.



Pyung Soo Kim:

A Design of Finite Memory Residual Generation Filter for Sensor Fault Detection


In the current paper, a residual generation filter with a finite memory structure is proposed for sensor fault detection. The proposed finite memory residual generation filter provides the residual by real-time filtering of the fault vector, using only the most recent finite measurements and inputs on the window. It is shown that the residual given by the proposed filter provides the exact fault for noise-free systems. The proposed residual generation filter is cast in a digital filter structure for amenability to hardware implementation. Finally, to illustrate the capability of the proposed filter, extensive simulations are performed for a discretized DC motor system with two types of sensor faults, an incipient soft bias-type fault and an abrupt bias-type fault. In particular, meaningful simulation results are given for the abrupt bias-type fault across diverse noise levels and window lengths.



Dan Zhang, Bin Wei:

Interactions and Optimizations Analysis between Stiffness and Workspace of 3-UPU Robotic Mechanism


The interactions between stiffness and workspace performances are studied. The stiffness in the x, y, and z directions, as well as the workspace of a 3-UPU mechanism, is studied and optimized. The stiffness of the robotic system in every movable direction is measured and analyzed, and it is observed that making the x and y translational stiffness larger reduces the z directional stiffness, i.e. the x and y translational stiffness conflicts with that in the z direction. Subsequently, objective functions for the summation of the x and y translational stiffness and for the z directional stiffness are established and optimized simultaneously. Because these two objectives are not on the same scale, a normalization of the objectives is taken into consideration. Meanwhile, the robotic system’s workspace is studied and optimized. By comparing the stiffness landscape with the workspace volume landscape, it is also observed that the z translational stiffness follows the same trend as the workspace volume, while the x and y translational stiffness shows the opposite trend. By employing Pareto front theory and differential evolution, the summation of the x and y translational stiffness and the volume of the workspace are optimized simultaneously. Finally, the mechanism is employed to synthesize an exercise-walking machine for stroke patients.



Kamil Sidor, Anna Szlachta:

The Impact of the Implementation of Edge Detection Methods on the Accuracy of Automatic Voltage Reading


The article presents the impact of the edge detection method used in image analysis on the accuracy of reading the measured value. In order to ensure automatic reading of the value indicated by an analog meter, a standard webcam and the LabVIEW programme were applied, together with the NI Vision Development tools. The Hough transform was used to detect the indicator. The programme output was compared across several edge detection methods: the Prewitt operator, the Roberts cross, the Sobel operator, and the Canny edge detector. The image analysis of the analog meter indicator was performed with each of these methods, and the results were compared with each other and presented.
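The needle-reading step can be illustrated with a much-simplified stand-in for the Hough transform: with the pivot position known, each foreground (edge) pixel votes for its angle about the pivot, and the histogram peak gives the needle direction. A real Hough transform votes in (ρ, θ) space, and the angle-to-voltage scale below is an assumed linear one:

```python
import math

def needle_angle(pixels, cx, cy, bins=180):
    """Vote each foreground pixel's angle about the pivot (cx, cy) into a
    1-degree histogram and return the peak angle in degrees [0, 180)."""
    votes = [0] * bins
    for x, y in pixels:
        theta = math.degrees(math.atan2(y - cy, x - cx)) % 180.0
        votes[int(round(theta)) % bins] += 1
    return max(range(bins), key=lambda b: votes[b])

def angle_to_reading(theta_deg, theta_min=0.0, theta_max=180.0, v_max=10.0):
    """Map the needle angle to a voltage on an assumed linear scale."""
    return v_max * (theta_deg - theta_min) / (theta_max - theta_min)

# synthetic needle: edge pixels along a 45-degree line from pivot (50, 50)
needle = [(50 + t, 50 + t) for t in range(1, 20)]
theta = needle_angle(needle, 50, 50)
reading = angle_to_reading(theta)
```

The quality of the edge detector determines which pixels vote, which is exactly why the choice among Prewitt, Roberts, Sobel, and Canny affects the final reading accuracy.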




No. 3  

  Measurement of Physical Quantities


David Matoušek, Jiří Hospodka, Ondřej Šubrt:

New Discrete Fibonacci Charge Pump Design, Evaluation and Measurement


This paper focuses on the practical aspects of realising Dickson and Fibonacci charge pumps. A standard Dickson charge pump circuit and a new Fibonacci charge pump implementation are compared. Both charge pumps were designed, evaluated by LTspice XVII simulations, and realised in discrete form on a printed circuit board (PCB). Finally, the key parameters, such as the output voltage, efficiency, and rise time, as well as the effects of varying the supply voltage and clock frequency, were measured.



Rudolf Palenčár, Peter Sopkuliak, Jakub Palenčár, Stanislav Ďuriš, Emil Suroviak, Martin Halaj:

Application of Monte Carlo Method for Evaluation of Uncertainties of ITS-90 by Standard Platinum Resistance Thermometer


Evaluation of the uncertainties of temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, assuming a multivariate Gaussian distribution for the input quantities. This allows the correlations among resistances at the defining fixed points to be taken into account. The assumption of a Gaussian probability density function is acceptable with respect to the several sources of uncertainty in the resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented on the example of specific data for a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validating its results.
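Drawing correlated resistance inputs for such a Monte Carlo run is the step that distinguishes this procedure from independent sampling. A minimal sketch for two fixed points, using a hand-rolled 2×2 Cholesky factor (all numbers are illustrative, not ITS-90 data):

```python
import math
import random

def cholesky2(cov):
    """Cholesky factor L (lower triangular) of a 2x2 covariance matrix,
    so that L * L^T reproduces cov."""
    a = math.sqrt(cov[0][0])
    b = cov[1][0] / a
    c = math.sqrt(cov[1][1] - b * b)
    return [[a, 0.0], [b, c]]

def correlated_gauss(mean, cov, n, seed=2):
    """Draw n correlated Gaussian pairs (e.g. SPRT resistances at two
    defining fixed points) by transforming independent standard normals
    with the Cholesky factor of the covariance matrix."""
    L = cholesky2(cov)
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z = (rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))
        out.append([mean[i] + L[i][0] * z[0] + L[i][1] * z[1]
                    for i in range(2)])
    return out

# illustrative means [ohm] and a covariance giving correlation 0.5
samples = correlated_gauss([25.0, 35.0],
                           [[1e-8, 5e-9], [5e-9, 1e-8]], 20000)
```

Each sampled pair is then pushed through the ITS-90 deviation functions, and the spread of the resulting temperatures gives the sought uncertainty.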



Yi Ji, Shanlin Sun, Hong-Bo Xie:

Stationary Wavelet-based Two-directional Two-dimensional Principal Component Analysis for EMG Signal Classification


Discrete wavelet transform (WT) followed by principal component analysis (PCA) has been a powerful approach to the analysis of biomedical signals. Wavelet coefficients at various scales and channels are usually flattened into a one-dimensional array, causing issues such as the curse of dimensionality and the small-sample-size problem. In addition, the lack of time-shift invariance of WT coefficients acts as noise and degrades classifier performance. In this study, we present a stationary wavelet-based two-directional two-dimensional principal component analysis (SW2D2PCA) method for the efficient and effective extraction of essential feature information from signals. Time-invariant multi-scale matrices are constructed in the first step. Two-directional two-dimensional principal component analysis then operates on the multi-scale matrices to reduce the dimension, rather than on vectors as in conventional PCA. Results are presented from an experiment classifying eight hand motions using 4-channel electromyographic (EMG) signals recorded from healthy subjects and amputees, which illustrates the efficiency and effectiveness of the proposed method for biomedical signal analysis.



Wojciech Sawczuk:

The Application of Vibration Accelerations in the Assessment of Average Friction Coefficient of a Railway Brake Disc


Due to their wide range of friction characteristics, resulting from the application of different friction materials, and their good heat dissipation conditions, railway disc brakes have long replaced block brakes in many rail vehicles. The block brake still remains in use, however, in low-speed cargo trains. The paper presents the assessment of the braking process through the analysis of vibrations generated by the components of the brake system during braking. It presents the possibility of a wider application of vibroacoustic (VA) diagnostics, which, aside from the assessment of technical condition (wear of the brake pads), also enables determining the changes of the average friction coefficient as a function of the braking onset speed. Vibration signals in the X, Y, and Z directions were measured and analyzed. The analysis of the results has shown that there is a relation between the values of the point measures and the wear of the brake pads.



Gaiyun He, Can Huang, Longzhen Guo, Guangming Sun,  Dawei Zhang:

Identification and Adjustment of Guide Rail Geometric Errors Based on BP Neural Network


The relative positions between the four slide blocks vary with the movement of the table due to the geometric errors of the guide rail. Consequently, the additional load on the slide blocks is increased. A new method of error measurement and identification using a self-designed stress test plate is presented. A BP neural network model is used to establish the mapping between the stress at key measurement points on the test plate and the displacements of the slide blocks. By measuring the stress, the relative displacements of the slide blocks are obtained, from which the geometric errors of the guide rails are derived. Firstly, a finite element model was built to find the key measurement points of the test plate. Then the BP neural network was trained using samples extracted from the finite element model, with the stress at the key measurement points taken as the input and the relative displacements of the slide blocks as the output. Finally, the geometric errors of the two guide rails were obtained from the measured stress. The results show that the maximum difference between the measured geometric errors and the output of the BP neural network was 5 μm, verifying the correctness and feasibility of the method.



Jun-Bao Li, Jing Liu, Jeng-Shyang Pan, Hongxun Yao:

Magnetic Resonance Superresolution Imaging Measurement with Dictionary-optimized Sparse Learning


Magnetic Resonance Super-resolution Imaging Measurement (MRIM) is an effective way of measuring materials. MRIM has wide applications in physics, chemistry, biology, geology, medicine, and material science, especially in medical diagnosis. It is feasible to improve the resolution of MR imaging by increasing the radiation intensity, but high radiation intensity and prolonged exposure to the magnetic field harm the human body. Thus, in practical applications, hardware imaging reaches its resolution limit. Software-based super-resolution technology is effective in improving image resolution. This work proposes a framework for a dictionary-optimized, sparse-learning-based MR super-resolution method. The framework addresses the problem of sample selection for dictionary learning in sparse reconstruction. A textural-complexity-based image quality representation is proposed to choose the optimal samples for dictionary learning. Comprehensive experiments show that the dictionary-optimized sparse learning improves the performance of sparse representation.




No. 4  

  Theoretical Problems of Measurement


Zoltan Domotor:

Torsor Theory of Physical Quantities and their Measurement - Invited paper


The principal objective of this paper is to provide a torsor theory of physical quantities and basic operations thereon. Torsors are introduced in a bottom-up fashion as actions of scale transformation groups on spaces of unitized quantities. In contrast, the shortcomings of other accounts of quantities that proceed in a top-down axiomatic manner are also discussed. In this paper, quantities are presented as dual counterparts of physical states. States serve as truth-makers of metrological statements about quantity values and are crucial in specifying alternative measurement units for base quantities. For illustration and ease of presentation, the classical notions of length, time, and instantaneous velocity are used as primordial examples. It is shown how torsors provide an effective description of the structure of quantities, systems of quantities, and transformations between them. Using the torsor framework, time-dependent quantities and their unitized derivatives are also investigated. Lastly, the torsor apparatus is applied to deterministic measurement of quantities.


  Measurement of Physical Quantities


Břetislav Ševčík, Lubomír Brančík, Michal Kubíček:

Optimized Signaling Method for High-Speed Transmission Channels with Higher Order Transfer Function


In this paper, selected results from testing an optimized CMOS-friendly signaling method for high-speed communications over cables and printed circuit boards (PCBs) are presented and discussed. The proposed signaling scheme uses a modified pulse-width-modulated (PWM) signal concept, which makes it possible to better equalize the significant channel losses during high-speed data transmission. This signaling method for overcoming losses in transmission channels with a higher order transfer function, typical of long cables and multilayer PCBs, is analyzed in the time and frequency domains. Experimental measurement results include a performance comparison with the conventional PWM scheme and clearly show the great potential of the modified signaling method for use in low-power CMOS-friendly equalization circuits, commonly considered in modern communication standards such as PCI Express and SATA, and in multi-gigabit SerDes interconnects.



Vimal Kumar Pathak, Sagar Kumar, Chitresh Nayak, NRNV Gowripathi Rao:

Evaluating Geometric Characteristics of Planar Surfaces using Improved Particle Swarm Optimization


This paper presents a modified particle swarm optimization (MPSO) algorithm for the evaluation of geometric characteristics defining the form and function of planar surfaces. The geometric features of planar surfaces are decomposed into four components, namely straightness, flatness, perpendicularity, and parallelism. A non-linear minimum-zone objective function is formulated mathematically for each planar surface geometric characteristic. Finally, the results of the proposed method are compared with previous work on the same problem and with other nature-inspired algorithms. The results demonstrate that the proposed MPSO algorithm is more efficient and accurate than the other algorithms and is well suited for effective and accurate evaluation of planar surface characteristics.
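For flatness, the minimum-zone objective is the spread of residuals about a reference plane, and the swarm searches over the plane's slope parameters. The sketch below uses a plain global-best PSO on hypothetical points; the paper's MPSO adds modifications not reproduced here:

```python
import random

def flatness_cost(points, a, b):
    """Minimum-zone flatness about the plane z = a*x + b*y + c: the
    max-min spread of residuals (the offset c cancels in the spread)."""
    r = [z - a * x - b * y for x, y, z in points]
    return max(r) - min(r)

def pso_flatness(points, n_particles=30, iters=200, seed=3):
    """Global-best PSO over the slope pair (a, b)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [flatness_cost(points, *p) for p in pos]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = flatness_cost(points, *pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

# hypothetical probe points lying exactly on a tilted plane (flatness 0)
pts = [(x, y, 0.1 * x + 0.2 * y) for x in range(5) for y in range(5)]
slopes, zone = pso_flatness(pts)
```

The swarm recovers slopes near (0.1, 0.2) and drives the zone width toward its true value of zero; the other three characteristics are handled with analogous objective functions.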



Peili Yin, Jianhua Wang, Chunxia Lu:

Measuring Software Test Verification for Complex Workpieces based on Virtual Gear Measuring Instrument


Verifying the validity and correctness of measuring software has been a thorny issue hindering the development of the Gear Measuring Instrument (GMI). The main reason is that the software itself is difficult to separate from the rest of the measurement system for independent evaluation. This paper presents a Virtual Gear Measuring Instrument (VGMI) to independently validate the measuring software. A triangular patch model with accurately controlled precision was taken as the virtual workpiece, and a universal collision detection model was established. The whole workpiece-measurement process is simulated with the VGMI in place of the GMI, and the measuring software is tested in the proposed virtual environment. Taking the involute profile measurement procedure as an example, the validity of the software is evaluated based on the simulation results; meanwhile, experiments using the same measuring software are carried out on an involute master in a GMI. The experimental results indicate consistency between the tooth profile deviations and the calibration results, thus verifying the accuracy of the gear measuring system, including its measurement procedures. It is shown that the presented VGMI can be applied to the validation of measuring software, providing a new and ideal platform for testing complex workpiece-measuring software without calibrated artifacts.



José Lopes, Jorge Rocha, Luís Redondo, João Cruz:

Particle Accelerator Focus Automation


The Laboratório de Aceleradores e Tecnologias de Radiação (LATR) at the Campus Tecnológico e Nuclear of Instituto Superior Técnico (IST) has a horizontal electrostatic particle accelerator based on the Van de Graaff machine, which is used for research in the area of material characterization. This machine produces alpha (He+) and proton (H+) beams with currents of a few µA and energies up to 2 MeV/q. Beam focusing is obtained using a cylindrical lens of the Einzel type, assembled near the high voltage terminal. This paper describes the developed system that automatically focuses the ion beam, using a personal computer running the LabVIEW software, a multifunction input/output board, and signal conditioning circuits. The focusing procedure consists of a scanning method to find the lens bias voltage which maximizes the beam current measured on a beam stopper target, which is used as feedback for the scanning cycle. This system, as part of a wider start-up and shut-down automation system built for this particle accelerator, brings great advantages to the operation of the accelerator by making it faster and easier to operate, requiring less human presence, and adding the possibility of total remote control in safe conditions.
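The scanning procedure can be sketched as a coarse sweep of the lens bias voltage followed by a few narrowing passes around the best point. The `measure_current` callable stands in for the DAQ read-back of the beam-stopper current; the voltage range and the parabolic response below are purely illustrative, not the real system (which runs in LabVIEW):

```python
def scan_focus(measure_current, v_min, v_max, steps=20, passes=3):
    """Find the lens bias voltage that maximizes the measured beam
    current: sweep a voltage grid, keep the best point, then re-sweep
    a narrower window around it a few times."""
    lo, hi = v_min, v_max
    best_v = lo
    for _ in range(passes):
        step = (hi - lo) / steps
        grid = [lo + i * step for i in range(steps + 1)]
        best_v = max(grid, key=measure_current)
        lo = max(v_min, best_v - step)
        hi = min(v_max, best_v + step)
    return best_v

# illustrative beam response peaking at 7.3 kV (not measured data)
response = lambda v: -(v - 7.3) ** 2
best = scan_focus(response, 0.0, 10.0)
```

Each narrowing pass shrinks the window to two coarse steps around the current optimum, so a handful of passes suffices to localize the focus voltage.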




No. 5  

  Measurement of Physical Quantities


Evgeny Sysoev, Sergey Kosolobov, Rodion Kulikov, Alexander Latyshev, Sergey Sitnikov, Ignat Vykhristyuk:

Interferometric Surface Relief Measurements with Subnano/Picometer Height Resolution


We present the MNP-1, an optical interference nanoprofiler designed for high-precision non-contact measurement of surface relief with subnanometer resolution (root mean square of measured values), based on partial scanning of the interference signal. The paper describes the construction of the measurement system with a Linnik interferometer and the algorithm for nanorelief surface reconstruction. Experimental results obtained with the MNP-1 for a silicon sample whose surface structure has a profile height of one interatomic distance are shown. It was proposed to use an atomically smooth surface as the reference mirror in the MNP-1 interferometer, which allowed us to measure monatomic steps of the presented silicon sample. Monatomic steps 0.31 nm in height on a silicon (111) surface were measured with a resolution of up to 5 pm.



Keheng Zhu, Xiaohui Jiang, Liang Chen, Haolin Li:

Performance Degradation Assessment of Rolling Element Bearings using Improved Fuzzy Entropy


Rolling element bearings are an important unit in rotating machines, and their performance degradation assessment is the basis of condition-based maintenance. Targeting the non-linear dynamic characteristics of faulty rolling element bearing signals, a bearing performance degradation assessment approach based on improved fuzzy entropy (FuzzyEn) is proposed in this paper. FuzzyEn has less dependence on data length and achieves more freedom of parameter selection and more robustness to noise. However, it neglects the global trend of the signal when calculating the similarity degree of two vectors, and thus cannot reflect the running state of rolling element bearings accurately. Based on this consideration, the FuzzyEn algorithm is improved in this paper, and the improved FuzzyEn is utilized as an indicator for bearing performance degradation evaluation. Vibration data from a run-to-failure test of rolling element bearings are used to validate the proposed method. The experimental results demonstrate that, compared with the traditional kurtosis and root mean square indicators, the proposed method can detect an incipient fault earlier and reflects the whole performance degradation process more clearly.
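The baseline FuzzyEn that the paper improves upon can be sketched as follows: embed the signal into short vectors, remove each vector's own mean (baseline), and grade vector similarity with a smooth exponential membership function instead of sample entropy's hard threshold. Parameters m = 2 and r = 0.2 are common defaults, not the paper's settings:

```python
import math

def fuzzy_entropy(x, m=2, r=0.2):
    """FuzzyEn(m, r) of a 1-D signal: ln(phi(m)) - ln(phi(m+1)), where
    phi(mm) is the mean similarity exp(-d^2 / r) over all pairs of
    baseline-removed mm-length embedding vectors (d = Chebyshev
    distance between vectors)."""
    def phi(mm):
        vs = []
        for i in range(len(x) - mm + 1):
            v = x[i:i + mm]
            mu = sum(v) / mm
            vs.append([a - mu for a in v])
        total, cnt = 0.0, 0
        for i in range(len(vs)):
            for j in range(len(vs)):
                if i == j:
                    continue
                d = max(abs(a - b) for a, b in zip(vs[i], vs[j]))
                total += math.exp(-(d * d) / r)
                cnt += 1
        return total / cnt
    return math.log(phi(m)) - math.log(phi(m + 1))
```

A regular vibration signature yields a lower FuzzyEn than an irregular one, which is the property that lets the indicator track a bearing's degradation over a run-to-failure test.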



Hongli Li, Xiaohuai Chen, Yinbao Cheng, Houde Liu, Hanbin Wang, Zhenying Cheng, Hongtao Wang:

Uncertainty Modeling and Evaluation of CMM Task Oriented Measurement Based on SVCMM


Due to the variety of measurement tasks and the complexity of the errors of a coordinate measuring machine (CMM), it is very difficult to reasonably evaluate the uncertainty of CMM measurement results. This has limited the application of CMMs, and task-oriented uncertainty evaluation has become a difficult problem to solve. Taking dimension measurement as an example, this paper puts forward a practical method for uncertainty modeling and evaluation of CMM task-oriented measurement (the SVCMM method). The method makes full use of the CMM acceptance or reinspection report and the Monte Carlo computer simulation method (MCM). An evaluation example is presented, and its results are evaluated by the traditional method given in the GUM and by the proposed method, respectively. The SVCMM method is verified to be feasible and practical. It can help CMM users conveniently complete a measurement uncertainty evaluation through a single measurement cycle.



Przemysław Otomański, Grzegorz Wiczyński, Bartosz Zając:

Flicker Vision of Selected Light Sources


The results of laboratory research on the dependence of flicker vision on voltage fluctuations are presented in the paper. The research was carried out on a purpose-built measuring stand, which included the examined light source, an amplitude-modulated voltage generator supplying the light source, and a system for positioning the observer with respect to the observed surface. The following light sources were used: one incandescent lamp and four LED luminaires from different producers. The results support conclusions about the influence of voltage fluctuations on flicker vision for the selected light sources. In particular, they indicate that LED luminaires are less susceptible to voltage fluctuations than incandescent bulbs and that flicker vision strongly depends on the type of LED source.



Jelena Jovanović, Dragan Denić, Uglješa Jovanović:

An Improved Linearization Circuit used for Optical Rotary Encoders


Optical rotary encoders generate nonlinear sine and cosine signals in response to a change of the angular position being measured. Due to the nonlinear shape of the encoder output signals, encoder sensitivity to very small changes of angular position is low, resulting in poor measurement accuracy. To improve the optical encoder's sensitivity and accuracy, an improved linearization circuit based on pseudo-linear signal generation and its further linearization with a two-stage piecewise linear analog-to-digital converter is presented in this paper. The proposed linearization circuit is composed of a mixed-signal circuit, which generates the analog pseudo-linear signal and determines the first four bits of the final digital result, and the two-stage piecewise linear analog-to-digital converter, which performs simultaneous linearization and digitalization of the pseudo-linear signal. As a result, the maximal absolute measurement error equals 3.77168·10⁻⁵ rad (0.00216°) over the full measurement range of 2π rad.
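The principle of piecewise-linear correction of a sine-shaped encoder signal can be sketched in software. This is only a loose software analogy to the paper's mixed-signal circuit: the segment count, the restricted quarter-period range, and the breakpoint table are all assumptions for illustration:

```python
import math

# Illustrative piecewise-linear linearization of a sine-shaped signal.
# Assumption: operate on a quarter period [0, pi/4] where sin is monotone
# and well-conditioned; 16 breakpoints map signal value back to angle.

SEGMENTS = 16
RANGE = math.pi / 4
xs = [i * RANGE / SEGMENTS for i in range(SEGMENTS + 1)]  # angle breakpoints
ys = [math.sin(x) for x in xs]                            # signal breakpoints

def linearize(v):
    """Map a signal value v in [0, sin(pi/4)] back to an angle estimate."""
    for i in range(SEGMENTS):
        if ys[i] <= v <= ys[i + 1]:  # ys is monotone increasing
            t = (v - ys[i]) / (ys[i + 1] - ys[i])
            return xs[i] + t * (xs[i + 1] - xs[i])
    return xs[-1]

# Worst-case residual nonlinearity over the range:
err = max(abs(linearize(math.sin(a)) - a)
          for a in [k * RANGE / 1000 for k in range(1001)])
print(f"max |error| with {SEGMENTS} segments: {err:.2e} rad")
```

Increasing the number of segments (or adding a second linearization stage, as the paper does in hardware) shrinks the residual error further.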




No. 6  

  Measurement of Physical Quantities


Jana Jablonská, Miroslav Mahdal, Milada Kozubková:

Spectral Analysis of Pressure, Noise and Vibration Velocity Measurement in Cavitation


The article deals with experimental investigation of water cavitation in a convergent-divergent nozzle of rectangular cross-section. In practice, a quick and simple identification of cavitation is essential, in particular distinguishing basic cavitation from cavitation generated additionally by sucked-in air. Air influences the formation, development, and size of the cavity area in hydraulic elements. Removal or reduction of the cavity area is possible through structural changes of the element; in the case of cavitation with sucked-in air, it is necessary to find the source of the air and seal it. The pressure gradient, the flow rate, the oxygen content in the tank (and hence the air dissolved in the water), the air flow rate, the noise intensity, and the vibration velocity on the nozzle wall were measured on laboratory equipment. From the selected measurements, the frequency spectra of the water flow variation in the cavity were compared and evaluated for cavitation without and with air saturation.



Jiří Přibil, Anna Přibilová, Ivan Frollo:

Two Methods of Automatic Evaluation of Speech Signal Enhancement Recorded in the Open-Air MRI Environment


The paper focuses on two methods for evaluating the success of speech signal enhancement applied to recordings made in an open-air magnetic resonance imager during phonation for 3D human vocal tract modeling. The first approach enables a comparison based on statistical analysis by ANOVA and hypothesis tests. The second method is based on classification by Gaussian mixture models (GMM). The performed experiments have confirmed that the proposed ANOVA and GMM classifiers for automatic evaluation of speech quality are functional and produce results fully comparable with the standard evaluation based on the listening test method.



Jie Chen, Jie Liu, Xingrui Wang, Longfei Zhang, Xiao Deng, Xinbin Cheng, Tongbao Li:

Optimization of Nano-Grating Pitch Evaluation Method Based on Line Edge Roughness Analysis


Pitch uncertainty and line edge roughness (LER) are among the critical quality attributes of a pitch standard, and the analyses of these two parameters are normally separate. The analysis of self-traceable Cr atom lithography nano-gratings shows a positive correlation and sensitivity between LER and the evaluated standard deviation of pitch. Therefore, LER can be used as an auxiliary pre-evaluation parameter for the pitch calculation method, such as the gravity center method or the zero-crossing points method. The optimization of the nano-grating evaluation method helps to obtain an accurate pitch value with fewer measurements and provides a comprehensive characterization of pitch standards.



Igor Zakharov, Pavel Neyezhmakov, Olesia Botsiura:

Verification of the Indicating Measuring Instruments Taking into Account their Instrumental Measurement Uncertainty


The specific features of measuring instrument verification based on the results of calibration are considered. It is noted that, in contrast to the verification procedure used in legal metrology, the verification procedure for calibrated measuring instruments has to take the uncertainty of measurement into account. As a consequence, a large number of measuring instruments considered compliant after verification in legal metrology turn out to be non-compliant after calibration. In this case, it is necessary to evaluate the probability of compliance of indicating measuring instruments. A procedure for determining the compliance probability on the basis of the Monte Carlo method is considered, and an example of calibration of a Vernier caliper is given.
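The idea of a Monte Carlo compliance probability can be illustrated with a minimal sketch. The indication error, its standard uncertainty, and the maximum permissible error below are hypothetical numbers, not the paper's caliper data:

```python
import random

# Illustrative Monte Carlo estimate of the probability of compliance.
# Hypothetical budget: the caliper's indication error is estimated as
# e = 0.02 mm with standard uncertainty u = 0.015 mm (assumed normal),
# against a maximum permissible error (MPE) of +/- 0.05 mm.

def compliance_probability(e=0.02, u=0.015, mpe=0.05, n=200_000):
    # Draw plausible "true" errors and count the fraction within tolerance.
    inside = sum(1 for _ in range(n) if abs(random.gauss(e, u)) <= mpe)
    return inside / n

p = compliance_probability()
print(f"probability of compliance: {p:.3f}")
```

With these numbers the instrument passes a naive MPE check (0.02 mm < 0.05 mm), yet the Monte Carlo estimate shows a non-negligible probability of non-compliance once the measurement uncertainty is taken into account.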



Dalibor Martisek, Jana Prochazkova:

The Enhancement of 3D Scans Depth Resolution Obtained by Confocal Scanning of Porous Materials


The 3D reconstruction of simply structured materials using a confocal microscope is widely used in many areas, including civil engineering. Nonetheless, scans of porous materials such as concrete or cement paste are highly problematic. A well-known problem of these scans is low depth resolution in comparison with the horizontal and vertical resolution. The degradation of the image depth resolution is caused by systematic errors and especially by various random events. Our method focuses on the elimination of such random events, mainly additive noise. We use an averaging method based on the Lindeberg–Lévy theorem that improves the final depth resolution to a level comparable with the horizontal and vertical resolution. Moreover, using the least squares method, we also precisely determine the limit value of the depth resolution, so the difference between the current resolution and the optimal one can be evaluated continuously. This substantially simplifies the scanning process, because the operator can easily determine the required number of scans.
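The noise-suppression mechanism behind such averaging can be demonstrated numerically: by the central limit theorem (Lindeberg–Lévy), the residual noise of the pixel-wise mean of N scans falls as 1/√N. The depth value and noise level below are hypothetical, for illustration only:

```python
import random
import statistics

# Illustrative sketch: averaging N repeated noisy depth scans of one pixel
# shrinks the residual standard deviation as sigma / sqrt(N).

random.seed(1)
TRUE_DEPTH = 10.0   # hypothetical depth of one pixel, micrometres
SIGMA = 0.8         # hypothetical additive noise of a single scan

def averaged_scan(n_scans):
    return statistics.fmean(random.gauss(TRUE_DEPTH, SIGMA)
                            for _ in range(n_scans))

def residual_std(n_scans, repeats=2000):
    return statistics.stdev(averaged_scan(n_scans) - TRUE_DEPTH
                            for _ in range(repeats))

for n in (1, 4, 16, 64):
    print(f"N={n:3d}: residual std ≈ {residual_std(n):.3f}"
          f"  (theory {SIGMA / n**0.5:.3f})")
```

This 1/√N law is exactly what lets an operator pick the number of scans needed to bring the depth resolution down to a target level.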



Roman Stryczek:

Alternative Methods for Estimating Plane Parameters Based on a Point Cloud


Non-contact measurement techniques using triangulation optical sensors are increasingly popular in measurements performed by industrial robots directly on production lines. The result of such measurements is often a cloud of measurement points characterized by considerable measuring noise, the presence of a number of points that differ from the reference model, and gross errors that must be eliminated from the analysis. To obtain vector information from the points contained in the cloud that describe reference models, the data obtained during a measurement should be subjected to appropriate processing. The present paper analyzes the suitability of the RANdom SAmple Consensus (RANSAC), Monte Carlo method (MCM), and Particle Swarm Optimization (PSO) methods for extraction of the reference model. The effectiveness of the tested methods is illustrated by examples of measuring the height of an object and the angle of a plane, based on experiments carried out under workshop conditions.
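Of the three methods compared, RANSAC is the most commonly sketched one. The following is a minimal, self-contained illustration of RANSAC plane extraction from a synthetic noisy cloud with gross outliers; the tolerance, iteration count, and test data are assumptions, not the paper's setup:

```python
import random

# Minimal RANSAC sketch for extracting a plane from a point cloud that
# contains measurement noise and ~20 % gross outliers (illustrative only).

random.seed(7)

def plane_from_3_points(p1, p2, p3):
    """Plane n·x = d through three points; None if they are collinear."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    norm = sum(c * c for c in n) ** 0.5
    if norm < 1e-12:
        return None
    n = tuple(c / norm for c in n)
    return n, sum(n[i] * p1[i] for i in range(3))

def ransac_plane(points, iters=500, tol=0.05):
    best_inliers, best_model = [], None
    for _ in range(iters):
        model = plane_from_3_points(*random.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) - d) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, model
    return best_model, best_inliers

# Synthetic cloud: plane z = 1.0 with noise, plus gross outliers.
cloud = [(random.uniform(0, 1), random.uniform(0, 1),
          1.0 + random.gauss(0, 0.01)) for _ in range(200)]
cloud += [(random.uniform(0, 1), random.uniform(0, 1),
           random.uniform(0, 3)) for _ in range(50)]

(normal, d), inliers = ransac_plane(cloud)
height = abs(d / normal[2])  # recovered plane height (normal ≈ (0, 0, ±1))
print(f"inliers: {len(inliers)}/{len(cloud)}, plane height ≈ {height:.3f}")
```

The consensus step is what makes RANSAC robust: outliers almost never gather enough inliers to beat the model fitted through genuine plane points, which is precisely the property needed for workshop-grade point clouds.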




