
Multilevel Linear Model


Introduction

Linear models and Generalized Linear Models (GLMs) are very useful tools for understanding the effects of the factors you want to examine. These models are also used for prediction: predicting the likely outcome given new values of your independent variables (which is why independent variables are also called predictors).

Although these models are powerful for analyzing data from HCI experiments, one concern is that they do not carefully handle the repeated-measure nature of the data (e.g., individual differences between participants). Multilevel models can accommodate such differences. Very roughly speaking, a multilevel model is a repeated-measure version of a linear model or GLM.

A multilevel model is often referred to as a “hierarchical,” “random-effect,” or “mixed-effect” model. The term “random effects” is often confusing because it is used to mean different things. In this wiki, I follow Data Analysis Using Regression and Multilevel/Hierarchical Models by Andrew Gelman and Jennifer Hill. I explain what “random effects” and “fixed effects” (the opposite of random effects) mean on this page; however, people hold different opinions about them (as Gelman and Hill's book explains). So, I won't go into a detailed discussion about how we should treat these factors.



High-level Understanding

Before jumping into examples of multilevel linear models, let's get a high-level understanding of them. Think about a very simple experiment comparing two techniques, Technique A and Technique B, where the measurement is performance time. In this experiment, 10 participants performed some tasks with both techniques; thus, the experiment is a within-subject design. If you do not consider the “Participant” factor, you will do a linear regression analysis in which your independent variable is Technique, and if its coefficient is non-zero, you will argue that the difference between the techniques caused some difference in the performance time.

However, this analysis does not fully consider the experiment design you had: the differences between the participants. For example, some participants are more comfortable with using computers than others, and thus their overall performance might have been better. Or the techniques might have affected participants to different degrees: some participants had similar performance with both techniques, and some had much better performance with one technique. The linear regression above tries to represent the data with one line, and unfortunately it aggressively aggregates away such differences, which may matter to your results in this case.

Multilevel regression, intuitively, allows us to have a model for each group represented by the within-subject factors. Thus, in this example, instead of having one linear model, you build 10 linear models, one for each participant, and analyze whether the techniques caused differences or not. In this way, we can also account for individual differences between the participants (they are described as differences between the models). What multilevel regression actually does lies somewhere between completely ignoring the within-subject factors (sticking with one model) and building a separate model for every single group (making n separate models for n participants). But I think this exaggerated explanation describes well how multilevel regression differs from simple regression, and is easy to understand.
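To make this concrete, here is a minimal sketch of these three options for the Technique example. The data frame d and its columns Time, Technique, and Participant are hypothetical names of my own, and the lmer() function comes from the lme4 package used later on this page:

library(lme4)

# Complete pooling: one model, Participant is ignored.
pooled <- lm(Time ~ Technique, data=d)

# No pooling: a separate model for each of the 10 participants.
separate <- lapply(split(d, d$Participant), function(x) lm(Time ~ Technique, data=x))

# Partial pooling (multilevel): one model, but the intercept varies by participant.
multilevel <- lmer(Time ~ Technique + (1|Participant), data=d)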



Varying-intercept vs. varying-slope

The previous section gave you a rough idea of what multilevel models are like. For the factors in which we want to take individual differences into account, we treat them as random effects and build a model for each level of these factors. But one question remains: how do we make “different models”?

If we built a completely separate model for each participant, the analysis would be very time-consuming. With the example above, we would have 10 models in total. Some might show significant effects of Technique, and some might not. In that case, how could we generalize the results and say whether Technique is really a significant factor?

Multilevel models avoid this trouble. Instead of building completely different models, multilevel regression changes the coefficients of only some parameters in the model for each level of the random effects. The coefficients of the other factors remain the same, so the model analysis becomes much easier.

Roughly speaking, there are two strategies you can take for random effects: varying-intercept or varying-slope (or both). Varying-intercept means that differences in the random effects are described as differences in intercepts. In the previous example, we would have 10 different intercepts (one per participant), but the coefficient for Technique would be constant. Varying-slope is the reverse: the coefficients (slopes) of some factors change across the levels of the random effect.

In many cases, the factors (more precisely, independent variables or predictors) are what you want to examine, so you want to generalize results about them. The intercept, in contrast, is usually not something you interpret directly in your analysis, so it is a natural place to absorb individual differences. Therefore, unless you have a clear reason to do otherwise, varying-intercept models will work for you. They are not computationally complicated and their results are straightforward to interpret. On this page, I show an example of a varying-intercept model.
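For reference, here is how these strategies look in lme4 formula syntax (the package used later on this page), again assuming the hypothetical data frame d from the sketch above:

library(lme4)

# Varying intercept: each participant gets their own intercept;
# the Technique coefficient is shared across participants.
m1 <- lmer(Time ~ Technique + (1|Participant), data=d)

# Varying slope: the Technique coefficient varies by participant;
# the intercept is shared.
m2 <- lmer(Time ~ Technique + (0 + Technique|Participant), data=d)

# Both varying: intercept and Technique coefficient vary by participant.
m3 <- lmer(Time ~ Technique + (1 + Technique|Participant), data=d)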



Fixed effects vs. random effects

Although I won't go into a detailed discussion of the difference between “random effects” and “fixed effects” (the opposite of random effects), it is important to have a high-level understanding of their differences. Then you won't get confused when you read other literature or try to use other statistical software.

This is my interpretation of the difference between fixed and random effects: in multilevel regression, you build multiple models. The coefficients of the fixed effects are constant, or “fixed,” across the models. In contrast, the coefficients of the random effects can differ, or (more or less) be “random.” Random effects are often factors whose effects you are not interested in but whose variance you want to separate out of your model. “Participant” is a good example of a random effect: generally, we are not interested in how different each participant's performance is, but we do not want those individual differences to affect the analysis of the other factors.
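Once you have fitted a model with lmer() (as we do below), lme4 lets you see this distinction directly. The calls below are only a sketch; model refers to the model built in the R example later on this page:

fixef(model)   # fixed effects: one coefficient per predictor, shared by everyone
ranef(model)   # random effects: each participant's deviation (here, an intercept shift)
coef(model)    # the fixed effects combined with each participant's random effect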

If you know a better way to understand the difference between fixed effects and random effects, please share it with us! :)



R example

I have prepared hypothetical data for trying out multilevel linear regression. You can download it from here. We are going to use that file in the following example.

Let me explain a hypothetical context of this hypothetical data :). We conducted an experiment with a touch-screen desktop computer. Our objective is to examine how mouse-based and touch-based interactions affect performance time in different applications. In this system, participants could use either a mouse click or a direct touch to select an object or an item in a menu. They could also use the mouse wheel or a pinch gesture to zoom the screen in and out. We let them choose how to interact with the system so that we could measure how people tend to use mouse-based and touch-based interactions. The design is within-subject across the three applications tested in this experiment.

The file contains the results of this experiment. Most of the columns should be self-explanatory. Time is the time (in seconds) for completing the task in each application (indicated by Application). MouseClick, Touch, MouseWheel, and PinchZoom are the counts of mouse clicks, direct touches, zooms with the mouse wheel, and zooms with the pinch gesture.

Now we want to examine how these numbers of MouseClick, Touch, MouseWheel, and PinchZoom affect performance time. Of course, there are a number of models we can think of, but let's try something simple:

Time = intercept + a * Application + b * MouseClick + c * Touch + d * MouseWheel + e * PinchZoom.

However, we want to take the effects of our experimental design into account. To do this, we tweak the model above.

Time = intercept + a * Application + b * MouseClick + c * Touch + d * MouseWheel + e * PinchZoom + Random(1|Participant).

Random(1|Participant) means that we are going to change the intercept for each participant. Yes, we are making a varying-intercept model. We assume that individual differences between participants can be explained by differences in the intercept. In this manner, we can keep the effects caused by individual differences from contaminating the estimates of the other factors.

There are a number of ways to do multilevel linear regression in R, but here we use the lme4 package. We also load the data.

data <- read.csv("example_multilinear.csv", header=T)
library(lme4)

Now we are building the model.

model <- lmer(Time ~ Application + MouseClick + Touch + MouseWheel + PinchZoom + (1|Participant), data=data, REML=T)

Again, (1|Participant) is the random-effect part. The “1” means the intercept, so we are changing the intercept for each participant. To fit the model, we use restricted maximum likelihood (REML). I won't go into the details of REML here; in most cases, you can simply use it. But if you have specific reasons, you can use the maximum likelihood method instead by setting REML=F.
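For instance, if you needed the maximum likelihood fit (a common reason is comparing models that differ in their fixed effects), you could refit the same model with REML=F. This is just a sketch, and model_ml is a name I made up:

# Maximum likelihood fit of the same model.
model_ml <- lmer(Time ~ Application + MouseClick + Touch + MouseWheel + PinchZoom +
                 (1|Participant), data=data, REML=F)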

Now let's take a look at the results.

summary(model)

Linear mixed model fit by REML
Formula: Time ~ Application + MouseClick + Touch + MouseWheel + PinchZoom + (1 | Participant)
   Data: data
   AIC   BIC logLik deviance REMLdev
 332.5 346.7 -157.2    345.8   314.5
Random effects:
 Groups      Name        Variance Std.Dev.
 Participant (Intercept) 451.06   21.238
 Residual                769.20   27.734
Number of obs: 36, groups: Participant, 12

Fixed effects:
              Estimate Std. Error t value
(Intercept)   378.3849    27.9025  13.561
ApplicationB   -0.1878    12.3886  -0.015
ApplicationC    3.0063    11.8937   0.253
MouseClick      3.4817     1.5274   2.280
Touch          -1.0565     1.8330  -0.576
MouseWheel      2.3630     2.9034   0.814
PinchZoom       0.9565     2.3469   0.408

Correlation of Fixed Effects:
            (Intr) ApplcB ApplcC MsClck Touch  MosWhl
ApplicatinB  0.041
ApplicatinC -0.380  0.453
MouseClick  -0.699 -0.316  0.091
Touch       -0.172 -0.043  0.106 -0.069
MouseWheel  -0.339  0.182  0.222  0.060 -0.330
PinchZoom   -0.293 -0.115 -0.047 -0.132 -0.062 -0.023

You can see the estimated coefficient for each factor, but it is somewhat unclear whether each one is really significant. Let's try coefplot. Unfortunately, coefplot in the arm package does not work with the object returned by lmer(), so we use a fixed version of coefplot.

source("http://www.math.mcmaster.ca/~bolker/classes/s756/labs/coefplot_new.R") coefplot(model)

align="center">

The thick and thin lines represent the 1 SD and 2 SD ranges. So it looks like MouseClick has a significant effect because its 2 SD range does not overlap zero.
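If the coefplot script is not available, you can do roughly the same check numerically with lme4's fixef() and vcov(). This is only a sketch of the coefficient ± 2 standard error ranges (est and se are names of my own), not part of the original analysis:

# Coefficient estimates and their +/- 2 standard error ranges.
est <- fixef(model)
se <- sqrt(diag(vcov(model)))
cbind(lower=est - 2*se, estimate=est, upper=est + 2*se)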



MCMC

So far, so good. We successfully created a model, and it looks like we have something interesting. But we are not quite sure yet which fixed effects are significant.

If you have read the multiple linear regression page, you may think we can do an ANOVA test. Technically, yes, we can. However, it is not straightforward to run because of the random effects: we cannot really be sure whether the test statistic is F-distributed. There have been several attempts to address this and make an ANOVA test usable for multilevel regression, such as the Kenward-Roger correction. However, it is debatable whether this correction is good enough for us to assume the corrected test statistic is F-distributed. Thus, ANOVA with the Kenward-Roger correction had not been implemented in R as of June 2011.

Instead, we use the Markov chain Monte Carlo (MCMC) method to estimate the coefficients and their highest probability density credible intervals (HPD credible intervals). Here, I will skip a detailed discussion of what MCMC does and what HPD credible intervals mean (I will do that later on a separate page). For now, simply think of MCMC as re-estimating the coefficient for each factor based on the results we got with lmer(), so that we have a better estimate.

We are going to use the languageR package to run MCMC. pvals.fnc() can take a bit of time for calculation.

library(languageR)
mcmc <- pvals.fnc(model, nsim=10000)
mcmc$fixed

              Estimate MCMCmean HPD95lower HPD95upper  pMCMC Pr(>|t|)
(Intercept)   378.3849 370.3576   308.7679    433.082 0.0001   0.0000
ApplicationB   -0.1878  -0.9891   -30.7150     28.146 0.9552   0.9880
ApplicationC    3.0063   4.2712   -23.8443     33.534 0.7670   0.8022
MouseClick      3.4817   4.0522     0.7503      7.630 0.0214   0.0302
Touch          -1.0565  -0.5666    -4.8117      3.845 0.7806   0.5688
MouseWheel      2.3630   2.7309    -3.8874      9.867 0.4238   0.4223
PinchZoom       0.9565   0.1278    -5.1017      5.276 0.9602   0.6866

The parameter nsim is the number of simulations to run. Here, I set it to 10000, but you may need to tweak it to make sure the estimation has converged.

MCMCmean is the coefficient re-estimated by MCMC. As this example shows, you may see relatively large differences between the REML estimates and the MCMC estimates. As far as I understand, the MCMC estimates are more reliable, so we are going to use them.

As you can see in the results, only MouseClick has a significant positive effect on performance time. So the results imply that reducing the number of mouse clicks may decrease the overall task completion time in the applications tested here. PinchZoom's effect (0.1278, 95% HPD CI: [-5.1017, 5.276]) is smaller than MouseWheel's (2.7309, 95% HPD CI: [-3.8874, 9.867]). Thus, encouraging users to use pinch gestures for zoom operations might contribute to a decrease in the overall task completion time.

Unfortunately, there isn't much more to say about these results, but I hope you have gotten the idea of how to interpret the results of multilevel linear models.



Multicollinearity

Lastly, let's make sure that we don't have multicollinearity problems. For lmer(), we cannot use the vif() function. Instead, we can use the vif.mer() function provided by Austin F. Frank (https://hlplab.wordpress.com/2011/02/24/diagnosing-collinearity-in-lme4/).

Copy the vif.mer() function (or simply the whole file) from https://raw.github.com/aufrank/R-hacks/master/mer-utils.R, paste it into your R console, and then use the vif.mer() function.

vif.mer(model)

MouseClick      Touch MouseWheel  PinchZoom
  1.059243   1.176954   1.152678   1.044505

So, in this example, we are fine. If a value is higher than 5, you probably should remove that factor; if it is higher than 2.5, you should at least consider removing it.
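If one of the predictors did have a problematic VIF, one option is to drop it and refit the model with update(), then re-check the VIFs. This is only a sketch; MouseWheel is an arbitrary illustration, not a recommendation for this data set:

# Drop a predictor, refit, and re-check the VIFs.
model2 <- update(model, . ~ . - MouseWheel)
vif.mer(model2)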

Another way to examine multicollinearity is to look at the pairwise correlations of the independent variables. I wrote some code to do this quickly.

panel.cor <- function(x, y, digits=3, prefix="", cex.cor, ...) {
    usr <- par("usr"); on.exit(par(usr))
    par(usr = c(0, 1, 0, 1))

    # Pearson correlation between the two variables
    r <- cor(x, y, use="complete.obs")
    txt <- format(c(r, 0.123456789), digits=digits)[1]

    prefix <- "r = "
    prefix2 <- "\nCI lower = "
    prefix3 <- "\nCI upper = "
    prefix4 <- "\np = "

    # Correlation test: confidence interval, p value, and significance stars
    rc <- cor.test(x, y)
    rci <- rc$conf.int
    rcp <- rc$p.value
    star <- symnum(rcp, corr = FALSE, na = FALSE,
                   cutpoints = c(0, 0.001, 0.01, 0.05, 0.1, 1),
                   symbols = c("***", "**", "*", ".", " "))

    txt2 <- format(c(rci, 0.123456789), digits=digits)[1]
    txt3 <- format(c(rci, 0.123456789), digits=digits)[2]
    txt4 <- format(c(rcp, 0.123456789), digits=digits)[1]
    txt <- paste(prefix, txt, prefix2, txt2, prefix3, txt3, prefix4, txt4, " ", star, sep="")

    # Write the summary text in the center of the panel
    if(missing(cex.cor)) cex.cor <- 0.8/strwidth(txt)
    text(0.5, 0.5, txt, cex = 1)
}

Copy the code above, and paste it into your R console. Then, run the following code.

pairs(data[,4:7], lower.panel=panel.smooth, upper.panel=panel.cor)

You will get the following graph.

It is hard to say from this which variable we should consider removing, but if you see a significant strong correlation between two predictors, you should be careful about whether it may cause multicollinearity.
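If you prefer the numbers to the plot, a quick sketch is to compute the correlation matrix of the same columns directly (assuming, as in the pairs() call above, that columns 4 to 7 are the four count variables):

# Correlation matrix of the four count variables (columns 4 to 7, as above).
round(cor(data[,4:7], use="complete.obs"), 3)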



How to report

Once you are done with your analysis, you will have to report the following things:

  • Your model,
  • How you chose the model (whether you simply defined it, used AIC, or used cross-validation),
  • R-squared and adjusted R-squared (lmer() does not provide them, and I am looking into how to get them),
  • How you tested the estimated coefficients (whether you ran MCMC or an F-test with the Kenward-Roger correction),
  • Estimated value for each coefficient, and
  • Significant effects with p value (if any).

For the last two items, you can say something like “MouseClick showed a significant effect (p<0.05). Its estimated coefficient with the MCMC method was 4.05 (95% HPD credible interval = [0.75, 7.63]).”

If you do not really care about the actual value of each coefficient, it is probably acceptable to report only the direction (positive or negative) of each significant effect. But at a minimum, you should report whether each significant effect contributes to the dependent variable positively or negatively.


Comments

Guest, 2016/01/02 14:03
How would you create a plot showing how an outcome variable (y) varies as a function of a predictor variable (x), whilst inter-item and inter-subject variability are modelled as random effects?
Guest, 2017/09/10 02:47
Thank you for your great tutorial. I am learning a lot from this website. Just for your information, mcmc is no longer supported from the package so I guess that the example in this page should be updated.