Evaluating the Fun Toolkit in Eliciting Children’s Learning Experience
The study investigates the Fun Toolkit as a method for eliciting learning experiences from children. The tool is an evaluation method that may be used to elicit users' experiences in order to inform the redesign of a user interface for children's learning games. The paper presents the Fun Toolkit as a survey method that helps researchers gather children's opinions about technology. The research reviews several studies that evaluate the Fun Toolkit and discuss its usefulness. The Smileyometer, the Fun Sorter and the Again-Again table are described as tools used to evaluate children's learning experience.
It has been emphasized that investigating the preferable aspects of a learning interface is vital. The Fun Toolkit has been used as a method that has been tested and validated with children. It includes a set of four tools that help elicit information from children. The survey was chosen as the most appropriate research method for the analysis, and the data were obtained from surveys. Through secondary data analysis, the researcher analyzes data collected by others to address the posed research questions.
Twenty children aged between 6 and 8 years from a primary school in Hertfordshire, England were chosen for the survey. They covered the normal range of ability, and some of the children needed help with reading the questions and instructions. It has been noted that the Fun Toolkit method is specifically designed around children's skills and needs in order to collect opinions on their experiences with interactive technology. However, the tool has some disadvantages: it provides limited data and is most effective only when combined with other evaluation methods, such as interviewing and surveying. It is also noted that the Fun Toolkit offers a rather general approach to addressing children. Some areas for further research have been identified.
Keywords: Fun Toolkit, evaluating method, survey, learning experience
Evaluating the Fun Toolkit in Eliciting Children's Learning Experience
The study examines using the Fun Toolkit as an evaluation method to elicit users' experiences in order to redesign the user interface of a learning game for children aged 6 to 10. In human-computer interaction (HCI), users are commonly involved in the design process (Read et al. 2012). However, methods for eliciting users' experiences are mostly focused on adults (reference), often making it challenging to obtain and evaluate opinions from children.
The proposed study focuses on investigating the preferable aspects of a learning game user interface, since children's opinions are vital given their role as consumers of applications, including educational games. An evaluation method that has been tested and validated with children, the Fun Toolkit, will be used for this study. The Fun Toolkit consists of a set of four tools for eliciting information from children, and the research will examine each of them in detail. The study will investigate the usefulness and feasibility of the Fun Toolkit with children in user experience evaluation, assessing its capability to provide quantitative data that enable an evaluation to facilitate the redesign of a mobile math application.
Technology is changing rapidly, and the mobile phone industry has been among the first to offer technology to children. Mobile phones make learning easier and faster to access (Leichtenstern et al. …, p. 38). Companies such as Apple have unveiled mobile technology designed to catch people's attention. The games these devices offer children can be suited to creativity, education or play, and the choice of game depends on the type of phone. The applications are colorful and can be three-dimensional, which attracts children and encourages them to play and learn more.
There have been some concerns about gathering user experiences from children. The Fun Toolkit was developed by Read and MacFarlane and first reported as version 1, with later revisions in 2001 and 2006. The toolkit is used to gather opinions from children, and the process is intended to be a fast, fair and fun experience for them. The Fun Toolkit has four special tools for collecting information from children: the Smileyometer, the Fun Sorter, the Again-Again table and the Funometer. The method is used to measure fun with children aged 5 to 10, covering the dimensions of expectations and engagement.
The Smileyometer is a visual analogue scale rated from 1 (awful) to 5 (brilliant) (Gavin et al. …., p. 383). It can be used both before and after children have interacted with a technology: beforehand to measure the children's anticipation, and afterwards to report on their experience of using it. The tool is widely used because it easily measures satisfaction and can be completed by children without any assistance (Gavin et al. n.d., p. 122).
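As a minimal sketch of how before/after Smileyometer data might be summarized (the ratings, game names and scoring below are hypothetical, not taken from the study):

```python
# Hypothetical Smileyometer responses on the 1 (awful) to 5 (brilliant) scale,
# collected before (anticipation) and after (experience) using a technology.
before = [4, 5, 5, 3, 4, 5, 4, 5]
after = [5, 4, 5, 4, 3, 5, 5, 4]

def mean(ratings):
    """Average rating on the 1-5 Smileyometer scale."""
    return sum(ratings) / len(ratings)

expected = mean(before)
experienced = mean(after)
print(f"Expected fun:    {expected:.2f}")
print(f"Experienced fun: {experienced:.2f}")
print(f"Shift:           {experienced - expected:+.2f}")
```

Comparing the two means shows whether the experience met, exceeded or fell short of the children's anticipation.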
The Fun Sorter is a tool children use to rank games. Ranking is done after play, when the outcome is of interest. The child ranks the games on different constructs, choosing what he or she found best and what worst (Gavin et al. …. p. 383). This can be used to compare two games, for instance by identifying the easiest to play or the most enjoyable.
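A Fun Sorter analysis could aggregate each child's ordering per construct; the sketch below uses invented game names, constructs and orderings purely for illustration:

```python
# Hypothetical Fun Sorter data: each child places the two games in order
# (best first) on each construct the researcher has chosen.
rankings = {
    "most fun": [["Game A", "Game B"], ["Game B", "Game A"], ["Game A", "Game B"]],
    "easiest to play": [["Game B", "Game A"], ["Game B", "Game A"], ["Game A", "Game B"]],
}

def first_place_counts(child_orders):
    """Count how often each game was ranked best on a construct."""
    counts = {}
    for order in child_orders:
        best = order[0]
        counts[best] = counts.get(best, 0) + 1
    return counts

for construct, orders in rankings.items():
    print(construct, first_place_counts(orders))
```

The game ranked first most often on a construct can then be read as the children's preference on that construct.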
The Again-Again table is used to measure the experience of each activity. The children are required to select 'yes', 'no' or 'maybe'. They are given a table listing the games they have played and asked questions such as "Would you consider buying this game if it were presented to you?" (Gavin et al. ….., p. 388).
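One plausible way to tabulate Again-Again responses is to count the answers per game and collapse them into a single comparable score; the data, game names and yes=2/maybe=1/no=0 weighting below are assumptions for illustration, not part of the toolkit's specification:

```python
from collections import Counter

# Hypothetical Again-Again answers: for each game, each child said
# 'yes', 'maybe' or 'no' to "Would you like to play this again?".
answers = {
    "Math Blaster": ["yes", "yes", "maybe", "no", "yes"],
    "Number Maze": ["maybe", "no", "no", "yes", "maybe"],
}

for game, responses in answers.items():
    tally = Counter(responses)
    # Assumed weighting: yes=2, maybe=1, no=0, so games can be
    # compared on a single "play again" score.
    score = 2 * tally["yes"] + tally["maybe"]
    print(game, dict(tally), "score:", score)
```

The higher-scoring game is the one the children are more willing to return to, which is the dimension the table is designed to capture.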
Regarding the Funometer, Read and MacFarlane "found that the Funometer and the Smileyometer were essentially similar and, so the Funometer has seldom been used since and is not discussed further here" (Read & MacFarlane ….., p. 3).
The strength of the Fun Toolkit method is that it is specifically designed for children, based on their needs and skills, in order to collect opinions on their experiences with interactive technologies. Its disadvantage is the limited data that can be obtained, owing to the way the approach addresses children as evaluators.
The main purpose of the literature review is to examine the relevant research already conducted on the Fun Toolkit. The review serves to identify and better understand the tool and to review the ways and methods of its implementation. The main contributors in the studied field are Gavin, Read, Sim, Horton, Janet, Metaxas, MacFarlane, Matthew, Bogdan, Lenhart, Breakwell and others.
It is common in the field of Child-Computer Interaction to find reports of survey methods used with children. In most cases children are asked to share their suggestions and ideas for partially completed or future designs; examples include the use of surveys (Janet & MacFarlane 2006, p. 81). The past decade has produced a variety of new methods for evaluating user experience with children (Sim & Horton 2012). However, the results of existing studies tended to be reported in isolation, ignoring cultural implications. The authors therefore attempted to compare the Fun Toolkit results and the cultural effect on game preference. The study was carried out with 37 children aged 7-9 from schools in Jordan and the UK, who were asked to play two different games on a tablet PC. The researchers captured the children's experiences using the Fun Toolkit and found that it can be considered a tool that measures user experience validly across cultures. The Fun Toolkit tools have been widely used in different combinations (MacFarlane et al. 2005), and their successful use demonstrates their usefulness and usability. However, in order to test the tools' validity, several specific tests of the tools are necessary. Read (2008) claims that answer reliability across the tools is an interesting subject to investigate (p. 124).
Read (2012) studies several metrics used in interaction design to gather data on user experience. She suggests using user surveys, conducted after users have experienced a product, to capture their level of satisfaction and opinions. The main attention is focused on the Smileyometer, the Fun Toolkit instrument used to evaluate children's experience by asking them questions about their expectations and fun (p. 241). The children chosen for the survey were divided into two groups, each given interactive technology installations appropriate to their age group, and asked to share their expectations before and after using the technology. The experiment showed age-related differences in how children used the Smileyometer to rate their experience and in how they reflected on the user experience after using the technology. It was concluded that children have high expectations of technologies and that, according to the surveys, modern technologies meet those expectations (Read 2012, p. 247). Smileyometers were also studied by Metaxas et al. (2005), who used them to measure expected and experienced fun by surveying children aged 8-12 before and after playing a game. Educational software was investigated by MacFarlane et al. (2005) using Smileyometers and Fun Sorters.
New methods for evaluating user experience with children have been implemented by Sim and Horton (2012). However, the results of their studies were reported in isolation from existing techniques. The researchers attempted to compare two methods for evaluating children's experience: 20 children aged 7-8 were asked to play two different games, and their experience was captured with the suggested evaluation methods, This or That and the Fun Toolkit. The results showed that both tools could establish a preference for a game. However, the researchers identified some inconsistencies between the individual tool results within the This or That method and the Fun Toolkit.
Janet and MacFarlane (2006) study the implementation of survey methods in Child-Computer Interaction, with an emphasis on the Fun Toolkit. They present new research on the usefulness and efficacy of the tools, and state that all survey methods used to evaluate user experience and expectations rely on a question-and-answer process (Bogdan & Biklen 1998; Coolican 2004). It is not easy to ask good questions that are also easy to understand and interpret; a good understanding of the question-answer process assists researchers in designing surveys (Borgers et al. 2004). Child-Computer Interaction has also been studied by Read and Markopoulos (2012), who identify it as an area of scientific investigation concerned with the phenomena surrounding the interaction between children and communication and computational technologies (p. 1). Their paper analyzes the key themes, design, evaluation, key concerns and empirical work within the studied topic, and presents them in a hierarchy to help others identify a place to work. They emphasize that interaction technology is an essential part of Child-Computer Interaction and is mainly focused on methodology, because researchers try to find the best ways to design and evaluate products aimed at children. They state that new methods should be developed to promote the study of these phenomena in Child-Computer Interaction.
Janet and MacFarlane (2006) suggest different stages of the above-mentioned survey process (p. 82). They also discuss factors that may affect question answering, including reading age, language ability, temperamental effects (self-belief, confidence and the desire to please) and motor skills. They note that a survey will always be restrictive (Subramanyan et al. 2006). The problem is that many researchers and interactive product developers are unaware of survey design yet are invariably involved in producing questions and suggesting answers. In many cases questions are asked in a way that helps obtain the desired answer. It is concluded that there are some inherent difficulties with survey methods for children, as well as some inadequacies on the part of survey designers. Hence, the authors attempt to discourage survey methods in Child-Computer Interaction. Because the way children are asked questions influences the reliability of their responses, the authors recommend free-recall questions, which are thought to be useful with children (p. 82).
Read (2008) studies the Fun Toolkit (v3) and states that the instrument can assist developers and researchers in gathering children's opinions about technology. The author takes a reflective look at some studies that present the toolkit and finds that the tool may be used to measure the level of user expectations. The researcher explains why it is useful to understand what children think about products, techniques and applications; user satisfaction is considered the most important factor (p. 119). Four general principles relating to the survey questions have been designed to obtain reliable results: ask for the minimum required information, ensure each test question can be asked and answered, and minimize the number of questions that may be refused. It is concluded that the Fun Toolkit is a useful tool with high potential (Read & MacFarlane 2000; 2001; 2006).
Research is a process of defining problems, formulating hypotheses about possible solutions, and collecting, organizing, summarizing and evaluating data in order to reach carefully tested solutions. Hence, research refers to the search for knowledge, whereas methodology is a procedure, or set of procedures, used to find answers to problems. According to Collis and Hussey (2009), there are several types of research: exploratory, descriptive, analytical and predictive. Analytical research provides a better understanding of the researched concept. Descriptive research uses quantitative techniques for collecting, analyzing and summarizing data. Predictive research is a speculation about future events. Finally, exploratory research is carried out when no previous research exists. This research concerns the Fun Toolkit as used to elicit children's learning experience.
The Fun Toolkit method has been used as the evaluation method for eliciting information from children. We have also used interviews comprising a variety of open and closed questions for children of different age groups. The questions were conversational in nature and started with closed questions, as these are easier to answer. The Fun Toolkit serves as an instrument that helps researchers and developers gather information and opinions about technology from children.
It is possible to gain insight into children's evaluation of a technology's attractiveness and usefulness while they interact with the devices, carrying out activities and tasks. Some verbalization techniques, including comparative evaluation, thinking aloud, observation and recording facial expressions, may also be employed. However, verbalization-in-use methods rely on the child's ability to verbalize while interacting with the technology.
In spite of the many advantages inherent to the Fun Toolkit, it has some disadvantages that have been criticized in the literature. One of the main disadvantages is that the tool is very limited and is most effective only when used with other evaluation methods, such as interviewing and surveying. It has also been said that the Fun Toolkit offers a rather general approach to addressing children. The tool serves as a method for building a relationship with children but provides a limited amount of data.
There is a need for further studies of heuristic evaluations for fun and for educational design, which are particularly critical for children's products, as well as for experiments involving children as evaluators rather than as evaluation subjects. Further research is needed to identify any ordering effects within the redundancies and methods of the survey questions. It is also necessary to investigate the relationships between fun, ease of use, usability and user selection that influence whether children would like to use the technology again and whether the product has given them pleasure.
In addition, it would be interesting to study reliability across the Fun Toolkit tools. The effectiveness of the tool for different age groups has not been studied and deserves attention in further research. The use of modern technology is another area for further research, since modern technology may cause unpredictable responses. Further study is also needed on the stability of children's opinions and the effects of prolonged exposure to products or technologies.