[R-meta] GUI for conventional, 3 level, and 3 level with CHE RVE meta-analysis

Noah Schroeder 4no@h@@chroeder @end|ng |rom gm@||@com
Wed May 15 18:22:49 CEST 2024


Hi Everyone,
 I would like to share an open-source project I've been working on and
ask for feedback. This is the first coding project I've ever done, so I
am both nervous and excited to share it, given that I'm not very good
with R yet. But if I don't share it for feedback, how can I ever learn
and improve?

 Actionable point, in case you don't feel like reading the whole
message: I've built (and am still building) an open-source GUI based on
metafor/clubSandwich so that meta-analysts who don't use R can apply
more recent methods than conventional meta-analysis. I would appreciate
any feedback on the GUI, and especially on the validation code, to help
make sure I didn't make any mistakes, particularly in the 3 level CHE
RVE meta-analysis code.
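For readers comparing against the validation scripts, here is a minimal
sketch of the standard CHE RVE workflow with metafor and clubSandwich.
The column names (study, es_id, yi, vi) and the assumed sampling
correlation rho = 0.6 are illustrative assumptions, not necessarily what
the app or the validation scripts use:

```r
library(metafor)
library(clubSandwich)

# constant sampling correlation (CHE) working covariance matrix;
# rho = 0.6 is an assumed within-study correlation of effect sizes
V <- vcalc(vi, cluster = study, obs = es_id, rho = 0.6, data = dat)

# three-level working model: effect sizes nested within studies
res <- rma.mv(yi, V, random = ~ 1 | study/es_id, data = dat)

# cluster-robust variance estimation (RVE) with the small-sample
# CR2 adjustment from clubSandwich
robust(res, cluster = study, clubSandwich = TRUE)
```

Results from the app should be checked against output like this, run on
the same data with the same rho.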

GUI (limited to 25 hrs/month at the moment, so please close it when
finished): https://noahschroeder.shinyapps.io/SimpleMeta-Analysis/ -
I will move it to a server with more uptime when it's ready for broader
release, but I need to find a solution that isn't very expensive
(please feel free to message me off-list if you have ideas - I'm
currently considering https://www.polished.tech/, DigitalOcean, and
AWS).

GitHub: https://github.com/noah-schroeder/simple_meta-analysis/ - the
full application repo, which also contains sample data sets and what
I'm calling 'validation codes': the standard R scripts for each type of
analysis. Results from the app should match the results from the
validation scripts, with one exception: for CHE RVE 3 level analyses,
the GUI removes levels of the moderator with only one comparison before
running the analysis, which can lead to differing results. The app is
open about what it is doing and why.
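The level-dropping step described above could be reproduced outside the
app with a few lines of base R. This is a hypothetical illustration;
the column name 'moderator' is an assumption:

```r
# keep only moderator levels observed in more than one comparison,
# mirroring what the GUI does before the CHE RVE moderator analysis
keep <- names(which(table(dat$moderator) > 1))
dat_sub <- dat[dat$moderator %in% keep, ]
```

Running the validation script on dat_sub rather than dat should then
match the app's output for those analyses.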

   - If you're not familiar with Shiny, you can download everything in
the repo, install the necessary packages, open app.r, and run it in
RStudio; it should launch the app locally.
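The steps above can be sketched as follows (the package list is an
assumption based on what the app is built on; app.r may load others):

```r
# one-time setup: install the packages the app depends on
install.packages(c("shiny", "metafor", "clubSandwich"))

# launch the app from the directory containing app.r
shiny::runApp(".")
```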

I do not get offended easily - I appreciate any and all feedback,
either here or to me directly.

Wolfgang and James, I sincerely hope you do not mind my working on this
project, as I did not check with either of you first. My goal is to
build on the foundation you've laid with your R packages and make your
work more accessible to those who struggle with coding or have no
interest in it.

 Specific questions/notes:
- As I noted in a previous message to the list, I am a completely
self-taught R user. I would appreciate a second (or third, fourth,
etc.) set of eyes on the validation codes.

- I know the GUI currently only scratches the surface of what is
possible. For the initial prototype I tried to build the 3 types of
analyses people in my field would/could/should be using most
frequently. I plan to expand the GUI to new analyses as time goes on.

- I realize that not all the information from metafor output is
provided to the user. I was trying to avoid information overload and
present information similarly to what users are accustomed to from
other GUI-based programs. If I missed anything you feel is critical,
please let me know. I think I will be adding residual heterogeneity to
the output for moderator analyses.
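For the residual heterogeneity addition, metafor already exposes the
relevant statistics on fitted moderator models, so the GUI could simply
surface them. A minimal sketch, assuming a simple rma() fit with a
moderator (the column names are illustrative):

```r
library(metafor)

# moderator model; for rma() fits with moderators, the QE test
# reported by metafor is the test for residual heterogeneity
res <- rma(yi, vi, mods = ~ moderator, data = dat)

res$QE    # residual heterogeneity statistic (Q_E)
res$QEp   # corresponding p-value
```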

- I have a reasonably long list of things I'd like to add to the
software (e.g., meta-regression with more than one variable,
interaction effects, etc.), but for now I'm focusing on existing
functionality and usability. Please feel free to suggest additions,
but, with all due respect, please know that right now my priority is
the accuracy of the existing functionality.

- Big-picture question: is this something you think your colleagues and
students who are interested in meta-analysis but do not code might want
to use?

If you read this far - thank you. I am really excited about this, as I
think it has the potential to help a lot of people do their analyses
faster and to use more appropriate methods than they otherwise might if
they don't code in R. Thank you in advance, and I look forward to your
honest feedback, either here on the list or to my personal email.

Sidenote - I know the Shiny code is a bit sloppy: likely redundant in
places, less than optimized, etc. I started this with essentially no
Shiny knowledge and learned a ton along the way, so it's probably
pretty obvious where I started and finished if you review the Shiny
code, haha. If you're interested in helping to validate the analyses, I
think reviewing the validation scripts is probably more helpful than
the Shiny code, because with the data set I tried, everything matches
between the app and the validation script results.

Best Regards,
Noah



More information about the R-sig-meta-analysis mailing list